Test Report: Docker_Linux_crio 22168

9b787847521167b42f6debd67da4dc2d018928d7:2025-12-17:42812

Failed tests (26/415)

TestAddons/serial/Volcano (0.25s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:852: skipping: crio not supported
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-401977 addons disable volcano --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-401977 addons disable volcano --alsologtostderr -v=1: exit status 11 (253.191887ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1217 00:06:46.260607   25953 out.go:360] Setting OutFile to fd 1 ...
	I1217 00:06:46.260805   25953 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:06:46.260818   25953 out.go:374] Setting ErrFile to fd 2...
	I1217 00:06:46.260825   25953 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:06:46.261160   25953 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-12816/.minikube/bin
	I1217 00:06:46.261497   25953 mustload.go:66] Loading cluster: addons-401977
	I1217 00:06:46.261962   25953 config.go:182] Loaded profile config "addons-401977": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:06:46.261986   25953 addons.go:622] checking whether the cluster is paused
	I1217 00:06:46.262135   25953 config.go:182] Loaded profile config "addons-401977": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:06:46.262153   25953 host.go:66] Checking if "addons-401977" exists ...
	I1217 00:06:46.262725   25953 cli_runner.go:164] Run: docker container inspect addons-401977 --format={{.State.Status}}
	I1217 00:06:46.284383   25953 ssh_runner.go:195] Run: systemctl --version
	I1217 00:06:46.284441   25953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-401977
	I1217 00:06:46.302152   25953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/addons-401977/id_rsa Username:docker}
	I1217 00:06:46.392279   25953 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 00:06:46.392381   25953 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 00:06:46.420262   25953 cri.go:89] found id: "ad54e02660b07cbde6d493c7d5e3ed172475b94a9eaee4e87d4bd9ef151c0b22"
	I1217 00:06:46.420302   25953 cri.go:89] found id: "45b944b097394c01a6a9f73b7481a21c99e516a6329609ae554a14bb17a1b0c4"
	I1217 00:06:46.420306   25953 cri.go:89] found id: "96a32ce198ee5fea679ca8aa4c1ec23792c97d3131fb348ba7d61057703f8b98"
	I1217 00:06:46.420309   25953 cri.go:89] found id: "6060f042efdda63b0e493f156e033554f81a8024186ecd9b75e89903f49cc5a6"
	I1217 00:06:46.420313   25953 cri.go:89] found id: "cfb270351b7717448c0caad78a981c83e200c3645e7ea23795af66b940e7f694"
	I1217 00:06:46.420319   25953 cri.go:89] found id: "60a76e334b179e66cc6937fdcf120c474c69221436fe9732a138ce177b409c81"
	I1217 00:06:46.420323   25953 cri.go:89] found id: "404a83db71038ead5c1b120d09189bcde64f22a00bc12c772da57ed50d0b4e31"
	I1217 00:06:46.420329   25953 cri.go:89] found id: "7477be1e8e83d8e93db214a41f2cbe2dac4702f15bafed422689a1ad41a282ee"
	I1217 00:06:46.420333   25953 cri.go:89] found id: "88b4569360ba98b30f65b36287d6c38a51676cebffa1785dc5414861aa1a0629"
	I1217 00:06:46.420348   25953 cri.go:89] found id: "7ad73ae76171d2d2105f4ccfe0862424948abc4be7d39a9f3d3999660c222211"
	I1217 00:06:46.420356   25953 cri.go:89] found id: "e77e53ca2e567bf130231e95ef7e993ca42c0bc61aab6ffced345e9e69c005cc"
	I1217 00:06:46.420361   25953 cri.go:89] found id: "d5d932c1082d35b313501328162c7f2f663374a7e3c58c4c2b1114359e9493df"
	I1217 00:06:46.420369   25953 cri.go:89] found id: "5d7bc94a6e7622d199da43ca8e9942b4e849ae76949d0554b9d548a510dd26ce"
	I1217 00:06:46.420374   25953 cri.go:89] found id: "b3c5366ec83c76413d2675b009b0c59e92e94121bf53dd83abb96b2fc0bd58b7"
	I1217 00:06:46.420381   25953 cri.go:89] found id: "dfd9e15edab91358da5ee7de7e20baaf2f8b820f9507af61b26dcbf0be9749ac"
	I1217 00:06:46.420395   25953 cri.go:89] found id: "a380c22257b5cfb547f66e134e244b2c1d6bd55bad431f846b76089ef28f6a89"
	I1217 00:06:46.420404   25953 cri.go:89] found id: "f6e58bb2900bb7013f6f81ccca2250cb1b6547be3edcc33d4bc867ae9d0b4072"
	I1217 00:06:46.420411   25953 cri.go:89] found id: "383049ced70e6adc7b2ef0d1a415cd527a2670898bfb2873c9b8955afffff3eb"
	I1217 00:06:46.420416   25953 cri.go:89] found id: "840301d1bb594051e430e85719c2707ed97013c7e3269f84012213ab768d9935"
	I1217 00:06:46.420424   25953 cri.go:89] found id: "950dc7c477829d5fc62b7e10ed2edf92016de18feb8bc6c8d8262fbf28097b78"
	I1217 00:06:46.420432   25953 cri.go:89] found id: "c85efeb6af746eaf16f9b1ef2458c5065693555ece6e3b595a07ccc7b8c2e6d9"
	I1217 00:06:46.420439   25953 cri.go:89] found id: "f55d4645a3da61635311b8471dafe61926de73f6c6575bcdc112a086cfde666a"
	I1217 00:06:46.420444   25953 cri.go:89] found id: "a9fb6926bb935bb35c23586c7a59d3ecc32fdac56e6508767e75f4b3b5db4340"
	I1217 00:06:46.420451   25953 cri.go:89] found id: "a5dab92a052f84df44c207e5dd5c238be41faadb22b92368fa8135c2af2fd265"
	I1217 00:06:46.420455   25953 cri.go:89] found id: ""
	I1217 00:06:46.420507   25953 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 00:06:46.433281   25953 out.go:203] 
	W1217 00:06:46.434451   25953 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T00:06:46Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T00:06:46Z" level=error msg="open /run/runc: no such file or directory"
	
	W1217 00:06:46.434471   25953 out.go:285] * 
	* 
	W1217 00:06:46.437728   25953 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 00:06:46.438801   25953 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable volcano addon: args "out/minikube-linux-amd64 -p addons-401977 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.25s)

TestAddons/parallel/Registry (13.52s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:384: registry stabilized in 3.185272ms
addons_test.go:386: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-6b586f9694-z62qp" [bddc8662-c9eb-4392-837b-010328dd2e70] Running
addons_test.go:386: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.00382359s
addons_test.go:389: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-proxy-58fhj" [9b3ecc60-f54f-46fd-8a40-a56e4574bb5b] Running
addons_test.go:389: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003112453s
addons_test.go:394: (dbg) Run:  kubectl --context addons-401977 delete po -l run=registry-test --now
addons_test.go:399: (dbg) Run:  kubectl --context addons-401977 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:399: (dbg) Done: kubectl --context addons-401977 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.076487235s)
addons_test.go:413: (dbg) Run:  out/minikube-linux-amd64 -p addons-401977 ip
2025/12/17 00:07:07 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-401977 addons disable registry --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-401977 addons disable registry --alsologtostderr -v=1: exit status 11 (229.713912ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1217 00:07:07.582716   28627 out.go:360] Setting OutFile to fd 1 ...
	I1217 00:07:07.582853   28627 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:07:07.582862   28627 out.go:374] Setting ErrFile to fd 2...
	I1217 00:07:07.582867   28627 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:07:07.583085   28627 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-12816/.minikube/bin
	I1217 00:07:07.583345   28627 mustload.go:66] Loading cluster: addons-401977
	I1217 00:07:07.583628   28627 config.go:182] Loaded profile config "addons-401977": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:07:07.583645   28627 addons.go:622] checking whether the cluster is paused
	I1217 00:07:07.583718   28627 config.go:182] Loaded profile config "addons-401977": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:07:07.583729   28627 host.go:66] Checking if "addons-401977" exists ...
	I1217 00:07:07.584138   28627 cli_runner.go:164] Run: docker container inspect addons-401977 --format={{.State.Status}}
	I1217 00:07:07.602503   28627 ssh_runner.go:195] Run: systemctl --version
	I1217 00:07:07.602562   28627 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-401977
	I1217 00:07:07.618770   28627 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/addons-401977/id_rsa Username:docker}
	I1217 00:07:07.709799   28627 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 00:07:07.709887   28627 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 00:07:07.737817   28627 cri.go:89] found id: "ad54e02660b07cbde6d493c7d5e3ed172475b94a9eaee4e87d4bd9ef151c0b22"
	I1217 00:07:07.737837   28627 cri.go:89] found id: "45b944b097394c01a6a9f73b7481a21c99e516a6329609ae554a14bb17a1b0c4"
	I1217 00:07:07.737841   28627 cri.go:89] found id: "96a32ce198ee5fea679ca8aa4c1ec23792c97d3131fb348ba7d61057703f8b98"
	I1217 00:07:07.737845   28627 cri.go:89] found id: "6060f042efdda63b0e493f156e033554f81a8024186ecd9b75e89903f49cc5a6"
	I1217 00:07:07.737848   28627 cri.go:89] found id: "cfb270351b7717448c0caad78a981c83e200c3645e7ea23795af66b940e7f694"
	I1217 00:07:07.737852   28627 cri.go:89] found id: "60a76e334b179e66cc6937fdcf120c474c69221436fe9732a138ce177b409c81"
	I1217 00:07:07.737855   28627 cri.go:89] found id: "404a83db71038ead5c1b120d09189bcde64f22a00bc12c772da57ed50d0b4e31"
	I1217 00:07:07.737858   28627 cri.go:89] found id: "7477be1e8e83d8e93db214a41f2cbe2dac4702f15bafed422689a1ad41a282ee"
	I1217 00:07:07.737860   28627 cri.go:89] found id: "88b4569360ba98b30f65b36287d6c38a51676cebffa1785dc5414861aa1a0629"
	I1217 00:07:07.737868   28627 cri.go:89] found id: "7ad73ae76171d2d2105f4ccfe0862424948abc4be7d39a9f3d3999660c222211"
	I1217 00:07:07.737872   28627 cri.go:89] found id: "e77e53ca2e567bf130231e95ef7e993ca42c0bc61aab6ffced345e9e69c005cc"
	I1217 00:07:07.737875   28627 cri.go:89] found id: "d5d932c1082d35b313501328162c7f2f663374a7e3c58c4c2b1114359e9493df"
	I1217 00:07:07.737883   28627 cri.go:89] found id: "5d7bc94a6e7622d199da43ca8e9942b4e849ae76949d0554b9d548a510dd26ce"
	I1217 00:07:07.737886   28627 cri.go:89] found id: "b3c5366ec83c76413d2675b009b0c59e92e94121bf53dd83abb96b2fc0bd58b7"
	I1217 00:07:07.737889   28627 cri.go:89] found id: "dfd9e15edab91358da5ee7de7e20baaf2f8b820f9507af61b26dcbf0be9749ac"
	I1217 00:07:07.737902   28627 cri.go:89] found id: "a380c22257b5cfb547f66e134e244b2c1d6bd55bad431f846b76089ef28f6a89"
	I1217 00:07:07.737911   28627 cri.go:89] found id: "f6e58bb2900bb7013f6f81ccca2250cb1b6547be3edcc33d4bc867ae9d0b4072"
	I1217 00:07:07.737916   28627 cri.go:89] found id: "383049ced70e6adc7b2ef0d1a415cd527a2670898bfb2873c9b8955afffff3eb"
	I1217 00:07:07.737919   28627 cri.go:89] found id: "840301d1bb594051e430e85719c2707ed97013c7e3269f84012213ab768d9935"
	I1217 00:07:07.737921   28627 cri.go:89] found id: "950dc7c477829d5fc62b7e10ed2edf92016de18feb8bc6c8d8262fbf28097b78"
	I1217 00:07:07.737928   28627 cri.go:89] found id: "c85efeb6af746eaf16f9b1ef2458c5065693555ece6e3b595a07ccc7b8c2e6d9"
	I1217 00:07:07.737931   28627 cri.go:89] found id: "f55d4645a3da61635311b8471dafe61926de73f6c6575bcdc112a086cfde666a"
	I1217 00:07:07.737934   28627 cri.go:89] found id: "a9fb6926bb935bb35c23586c7a59d3ecc32fdac56e6508767e75f4b3b5db4340"
	I1217 00:07:07.737937   28627 cri.go:89] found id: "a5dab92a052f84df44c207e5dd5c238be41faadb22b92368fa8135c2af2fd265"
	I1217 00:07:07.737939   28627 cri.go:89] found id: ""
	I1217 00:07:07.737979   28627 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 00:07:07.751124   28627 out.go:203] 
	W1217 00:07:07.752229   28627 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T00:07:07Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T00:07:07Z" level=error msg="open /run/runc: no such file or directory"
	
	W1217 00:07:07.752250   28627 out.go:285] * 
	* 
	W1217 00:07:07.755198   28627 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 00:07:07.756412   28627 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable registry addon: args "out/minikube-linux-amd64 -p addons-401977 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (13.52s)

TestAddons/parallel/RegistryCreds (0.39s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:325: registry-creds stabilized in 2.774183ms
addons_test.go:327: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-401977
addons_test.go:334: (dbg) Run:  kubectl --context addons-401977 -n kube-system get secret -o yaml
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-401977 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-401977 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (237.119354ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1217 00:07:04.987809   28276 out.go:360] Setting OutFile to fd 1 ...
	I1217 00:07:04.988095   28276 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:07:04.988105   28276 out.go:374] Setting ErrFile to fd 2...
	I1217 00:07:04.988110   28276 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:07:04.988302   28276 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-12816/.minikube/bin
	I1217 00:07:04.988571   28276 mustload.go:66] Loading cluster: addons-401977
	I1217 00:07:04.988882   28276 config.go:182] Loaded profile config "addons-401977": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:07:04.988900   28276 addons.go:622] checking whether the cluster is paused
	I1217 00:07:04.988979   28276 config.go:182] Loaded profile config "addons-401977": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:07:04.989006   28276 host.go:66] Checking if "addons-401977" exists ...
	I1217 00:07:04.989382   28276 cli_runner.go:164] Run: docker container inspect addons-401977 --format={{.State.Status}}
	I1217 00:07:05.007657   28276 ssh_runner.go:195] Run: systemctl --version
	I1217 00:07:05.007719   28276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-401977
	I1217 00:07:05.026006   28276 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/addons-401977/id_rsa Username:docker}
	I1217 00:07:05.116900   28276 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 00:07:05.117005   28276 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 00:07:05.147659   28276 cri.go:89] found id: "ad54e02660b07cbde6d493c7d5e3ed172475b94a9eaee4e87d4bd9ef151c0b22"
	I1217 00:07:05.147694   28276 cri.go:89] found id: "45b944b097394c01a6a9f73b7481a21c99e516a6329609ae554a14bb17a1b0c4"
	I1217 00:07:05.147700   28276 cri.go:89] found id: "96a32ce198ee5fea679ca8aa4c1ec23792c97d3131fb348ba7d61057703f8b98"
	I1217 00:07:05.147705   28276 cri.go:89] found id: "6060f042efdda63b0e493f156e033554f81a8024186ecd9b75e89903f49cc5a6"
	I1217 00:07:05.147709   28276 cri.go:89] found id: "cfb270351b7717448c0caad78a981c83e200c3645e7ea23795af66b940e7f694"
	I1217 00:07:05.147715   28276 cri.go:89] found id: "60a76e334b179e66cc6937fdcf120c474c69221436fe9732a138ce177b409c81"
	I1217 00:07:05.147719   28276 cri.go:89] found id: "404a83db71038ead5c1b120d09189bcde64f22a00bc12c772da57ed50d0b4e31"
	I1217 00:07:05.147723   28276 cri.go:89] found id: "7477be1e8e83d8e93db214a41f2cbe2dac4702f15bafed422689a1ad41a282ee"
	I1217 00:07:05.147728   28276 cri.go:89] found id: "88b4569360ba98b30f65b36287d6c38a51676cebffa1785dc5414861aa1a0629"
	I1217 00:07:05.147740   28276 cri.go:89] found id: "7ad73ae76171d2d2105f4ccfe0862424948abc4be7d39a9f3d3999660c222211"
	I1217 00:07:05.147748   28276 cri.go:89] found id: "e77e53ca2e567bf130231e95ef7e993ca42c0bc61aab6ffced345e9e69c005cc"
	I1217 00:07:05.147753   28276 cri.go:89] found id: "d5d932c1082d35b313501328162c7f2f663374a7e3c58c4c2b1114359e9493df"
	I1217 00:07:05.147757   28276 cri.go:89] found id: "5d7bc94a6e7622d199da43ca8e9942b4e849ae76949d0554b9d548a510dd26ce"
	I1217 00:07:05.147767   28276 cri.go:89] found id: "b3c5366ec83c76413d2675b009b0c59e92e94121bf53dd83abb96b2fc0bd58b7"
	I1217 00:07:05.147771   28276 cri.go:89] found id: "dfd9e15edab91358da5ee7de7e20baaf2f8b820f9507af61b26dcbf0be9749ac"
	I1217 00:07:05.147787   28276 cri.go:89] found id: "a380c22257b5cfb547f66e134e244b2c1d6bd55bad431f846b76089ef28f6a89"
	I1217 00:07:05.147796   28276 cri.go:89] found id: "f6e58bb2900bb7013f6f81ccca2250cb1b6547be3edcc33d4bc867ae9d0b4072"
	I1217 00:07:05.147803   28276 cri.go:89] found id: "383049ced70e6adc7b2ef0d1a415cd527a2670898bfb2873c9b8955afffff3eb"
	I1217 00:07:05.147807   28276 cri.go:89] found id: "840301d1bb594051e430e85719c2707ed97013c7e3269f84012213ab768d9935"
	I1217 00:07:05.147812   28276 cri.go:89] found id: "950dc7c477829d5fc62b7e10ed2edf92016de18feb8bc6c8d8262fbf28097b78"
	I1217 00:07:05.147816   28276 cri.go:89] found id: "c85efeb6af746eaf16f9b1ef2458c5065693555ece6e3b595a07ccc7b8c2e6d9"
	I1217 00:07:05.147824   28276 cri.go:89] found id: "f55d4645a3da61635311b8471dafe61926de73f6c6575bcdc112a086cfde666a"
	I1217 00:07:05.147829   28276 cri.go:89] found id: "a9fb6926bb935bb35c23586c7a59d3ecc32fdac56e6508767e75f4b3b5db4340"
	I1217 00:07:05.147834   28276 cri.go:89] found id: "a5dab92a052f84df44c207e5dd5c238be41faadb22b92368fa8135c2af2fd265"
	I1217 00:07:05.147841   28276 cri.go:89] found id: ""
	I1217 00:07:05.147890   28276 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 00:07:05.162874   28276 out.go:203] 
	W1217 00:07:05.164009   28276 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T00:07:05Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T00:07:05Z" level=error msg="open /run/runc: no such file or directory"
	
	W1217 00:07:05.164029   28276 out.go:285] * 
	* 
	W1217 00:07:05.166921   28276 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 00:07:05.168205   28276 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable registry-creds addon: args "out/minikube-linux-amd64 -p addons-401977 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.39s)

TestAddons/parallel/Ingress (147.01s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:211: (dbg) Run:  kubectl --context addons-401977 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:236: (dbg) Run:  kubectl --context addons-401977 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:249: (dbg) Run:  kubectl --context addons-401977 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:353: "nginx" [b42a6040-94a7-49a6-9fab-d1c2ceb84407] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx" [b42a6040-94a7-49a6-9fab-d1c2ceb84407] Running
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 8.003399399s
I1217 00:07:05.088902   16354 kapi.go:150] Service nginx in namespace default found.
addons_test.go:266: (dbg) Run:  out/minikube-linux-amd64 -p addons-401977 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:266: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-401977 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m15.617286474s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:282: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:290: (dbg) Run:  kubectl --context addons-401977 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:295: (dbg) Run:  out/minikube-linux-amd64 -p addons-401977 ip
addons_test.go:301: (dbg) Run:  nslookup hello-john.test 192.168.49.2
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect addons-401977
helpers_test.go:244: (dbg) docker inspect addons-401977:

-- stdout --
	[
	    {
	        "Id": "219e112c500a63f9336b2666157863d4cfe597753815d1bf3cb7dc7b0552a566",
	        "Created": "2025-12-17T00:05:07.512571798Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 18755,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T00:05:07.543088192Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2e44aac5cae5bb6b68b129ed5c85e80a5c1aac07706537d46ba12326f0e5c3cf",
	        "ResolvConfPath": "/var/lib/docker/containers/219e112c500a63f9336b2666157863d4cfe597753815d1bf3cb7dc7b0552a566/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/219e112c500a63f9336b2666157863d4cfe597753815d1bf3cb7dc7b0552a566/hostname",
	        "HostsPath": "/var/lib/docker/containers/219e112c500a63f9336b2666157863d4cfe597753815d1bf3cb7dc7b0552a566/hosts",
	        "LogPath": "/var/lib/docker/containers/219e112c500a63f9336b2666157863d4cfe597753815d1bf3cb7dc7b0552a566/219e112c500a63f9336b2666157863d4cfe597753815d1bf3cb7dc7b0552a566-json.log",
	        "Name": "/addons-401977",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-401977:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-401977",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "219e112c500a63f9336b2666157863d4cfe597753815d1bf3cb7dc7b0552a566",
	                "LowerDir": "/var/lib/docker/overlay2/2b92b8898f7a98811215bd838566dddb1002cf7f5fcff05d32154ccc0b9fec51-init/diff:/var/lib/docker/overlay2/594b812fd6d8db89dab322ea9e00d43dd555e9709fb5e6953e3873cce717392c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2b92b8898f7a98811215bd838566dddb1002cf7f5fcff05d32154ccc0b9fec51/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2b92b8898f7a98811215bd838566dddb1002cf7f5fcff05d32154ccc0b9fec51/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2b92b8898f7a98811215bd838566dddb1002cf7f5fcff05d32154ccc0b9fec51/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-401977",
	                "Source": "/var/lib/docker/volumes/addons-401977/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-401977",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-401977",
	                "name.minikube.sigs.k8s.io": "addons-401977",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "9785f6653fa784d761c28e50fadc7c676d001811f421ddb0a68f5cdc441e1c28",
	            "SandboxKey": "/var/run/docker/netns/9785f6653fa7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "Networks": {
	                "addons-401977": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d27b277831966c02ef98dd516e15594caf20e2a10cfc9f62c3b9efd8d57b5104",
	                    "EndpointID": "8e84a2d47fdb52334c0de6e1447539e385c2616740d34e464939f0b8856e1892",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "6a:ff:fb:40:c5:84",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-401977",
	                        "219e112c500a"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-401977 -n addons-401977
helpers_test.go:253: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p addons-401977 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p addons-401977 logs -n 25: (1.085354592s)
helpers_test.go:261: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ --download-only -p binary-mirror-602188 --alsologtostderr --binary-mirror http://127.0.0.1:39411 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-602188 │ jenkins │ v1.37.0 │ 17 Dec 25 00:04 UTC │                     │
	│ delete  │ -p binary-mirror-602188                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-602188 │ jenkins │ v1.37.0 │ 17 Dec 25 00:04 UTC │ 17 Dec 25 00:04 UTC │
	│ addons  │ enable dashboard -p addons-401977                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-401977        │ jenkins │ v1.37.0 │ 17 Dec 25 00:04 UTC │                     │
	│ addons  │ disable dashboard -p addons-401977                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-401977        │ jenkins │ v1.37.0 │ 17 Dec 25 00:04 UTC │                     │
	│ start   │ -p addons-401977 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-401977        │ jenkins │ v1.37.0 │ 17 Dec 25 00:04 UTC │ 17 Dec 25 00:06 UTC │
	│ addons  │ addons-401977 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-401977        │ jenkins │ v1.37.0 │ 17 Dec 25 00:06 UTC │                     │
	│ addons  │ addons-401977 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-401977        │ jenkins │ v1.37.0 │ 17 Dec 25 00:06 UTC │                     │
	│ addons  │ enable headlamp -p addons-401977 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-401977        │ jenkins │ v1.37.0 │ 17 Dec 25 00:06 UTC │                     │
	│ addons  │ addons-401977 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-401977        │ jenkins │ v1.37.0 │ 17 Dec 25 00:06 UTC │                     │
	│ addons  │ addons-401977 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-401977        │ jenkins │ v1.37.0 │ 17 Dec 25 00:06 UTC │                     │
	│ addons  │ addons-401977 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-401977        │ jenkins │ v1.37.0 │ 17 Dec 25 00:07 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-401977                                                                                                                                                                                                                                                                                                                                                                                           │ addons-401977        │ jenkins │ v1.37.0 │ 17 Dec 25 00:07 UTC │ 17 Dec 25 00:07 UTC │
	│ addons  │ addons-401977 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-401977        │ jenkins │ v1.37.0 │ 17 Dec 25 00:07 UTC │                     │
	│ ssh     │ addons-401977 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-401977        │ jenkins │ v1.37.0 │ 17 Dec 25 00:07 UTC │                     │
	│ ip      │ addons-401977 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-401977        │ jenkins │ v1.37.0 │ 17 Dec 25 00:07 UTC │ 17 Dec 25 00:07 UTC │
	│ addons  │ addons-401977 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-401977        │ jenkins │ v1.37.0 │ 17 Dec 25 00:07 UTC │                     │
	│ addons  │ addons-401977 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-401977        │ jenkins │ v1.37.0 │ 17 Dec 25 00:07 UTC │                     │
	│ addons  │ addons-401977 addons disable amd-gpu-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-401977        │ jenkins │ v1.37.0 │ 17 Dec 25 00:07 UTC │                     │
	│ addons  │ addons-401977 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-401977        │ jenkins │ v1.37.0 │ 17 Dec 25 00:07 UTC │                     │
	│ addons  │ addons-401977 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-401977        │ jenkins │ v1.37.0 │ 17 Dec 25 00:07 UTC │                     │
	│ ssh     │ addons-401977 ssh cat /opt/local-path-provisioner/pvc-9efeedf4-dfa0-403b-ae36-a3e2e8cb966e_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-401977        │ jenkins │ v1.37.0 │ 17 Dec 25 00:07 UTC │ 17 Dec 25 00:07 UTC │
	│ addons  │ addons-401977 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-401977        │ jenkins │ v1.37.0 │ 17 Dec 25 00:07 UTC │                     │
	│ addons  │ addons-401977 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-401977        │ jenkins │ v1.37.0 │ 17 Dec 25 00:07 UTC │                     │
	│ addons  │ addons-401977 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-401977        │ jenkins │ v1.37.0 │ 17 Dec 25 00:07 UTC │                     │
	│ ip      │ addons-401977 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-401977        │ jenkins │ v1.37.0 │ 17 Dec 25 00:09 UTC │ 17 Dec 25 00:09 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 00:04:44
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 00:04:44.708836   18113 out.go:360] Setting OutFile to fd 1 ...
	I1217 00:04:44.709100   18113 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:04:44.709110   18113 out.go:374] Setting ErrFile to fd 2...
	I1217 00:04:44.709114   18113 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:04:44.709301   18113 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-12816/.minikube/bin
	I1217 00:04:44.709757   18113 out.go:368] Setting JSON to false
	I1217 00:04:44.710564   18113 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2835,"bootTime":1765927050,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 00:04:44.710613   18113 start.go:143] virtualization: kvm guest
	I1217 00:04:44.712392   18113 out.go:179] * [addons-401977] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 00:04:44.713943   18113 notify.go:221] Checking for updates...
	I1217 00:04:44.713953   18113 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 00:04:44.715239   18113 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 00:04:44.716596   18113 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22168-12816/kubeconfig
	I1217 00:04:44.717964   18113 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-12816/.minikube
	I1217 00:04:44.719149   18113 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 00:04:44.720344   18113 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 00:04:44.721583   18113 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 00:04:44.743479   18113 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1217 00:04:44.743619   18113 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:04:44.794226   18113 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:49 SystemTime:2025-12-17 00:04:44.784960384 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
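	For context, the "docker system info --format "{{json .}}"" probe logged above returns a single JSON object. Below is a minimal Go sketch of consuming it; the struct is a hand-picked subset of the fields visible in the log, not minikube's actual types:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// dockerInfo keeps only a few fields of interest; the real output
	// (shown in the log above) carries far more.
	type dockerInfo struct {
		ServerVersion   string `json:"ServerVersion"`
		OperatingSystem string `json:"OperatingSystem"`
		CgroupDriver    string `json:"CgroupDriver"`
		NCPU            int    `json:"NCPU"`
		MemTotal        int64  `json:"MemTotal"`
	}

	func main() {
		// Same invocation the log records.
		out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
		if err != nil {
			panic(err)
		}
		var info dockerInfo
		if err := json.Unmarshal(out, &info); err != nil {
			panic(err)
		}
		fmt.Printf("docker %s on %s (%d CPUs, %d bytes RAM, cgroup driver %s)\n",
			info.ServerVersion, info.OperatingSystem, info.NCPU, info.MemTotal, info.CgroupDriver)
	}
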
	I1217 00:04:44.794358   18113 docker.go:319] overlay module found
	I1217 00:04:44.796075   18113 out.go:179] * Using the docker driver based on user configuration
	I1217 00:04:44.797251   18113 start.go:309] selected driver: docker
	I1217 00:04:44.797265   18113 start.go:927] validating driver "docker" against <nil>
	I1217 00:04:44.797275   18113 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 00:04:44.797826   18113 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:04:44.848741   18113 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:49 SystemTime:2025-12-17 00:04:44.840098046 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 00:04:44.848967   18113 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1217 00:04:44.849183   18113 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 00:04:44.850838   18113 out.go:179] * Using Docker driver with root privileges
	I1217 00:04:44.851955   18113 cni.go:84] Creating CNI manager for ""
	I1217 00:04:44.852029   18113 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 00:04:44.852040   18113 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1217 00:04:44.852098   18113 start.go:353] cluster config:
	{Name:addons-401977 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-401977 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s}
	I1217 00:04:44.853505   18113 out.go:179] * Starting "addons-401977" primary control-plane node in "addons-401977" cluster
	I1217 00:04:44.854637   18113 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 00:04:44.855973   18113 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 00:04:44.857141   18113 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1217 00:04:44.857169   18113 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22168-12816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1217 00:04:44.857178   18113 cache.go:65] Caching tarball of preloaded images
	I1217 00:04:44.857220   18113 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 00:04:44.857268   18113 preload.go:238] Found /home/jenkins/minikube-integration/22168-12816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1217 00:04:44.857279   18113 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1217 00:04:44.857597   18113 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/addons-401977/config.json ...
	I1217 00:04:44.857621   18113 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/addons-401977/config.json: {Name:mka1ac6724c3ce75414158b232e8956807c75e7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:04:44.872732   18113 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 to local cache
	I1217 00:04:44.872854   18113 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local cache directory
	I1217 00:04:44.872872   18113 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local cache directory, skipping pull
	I1217 00:04:44.872878   18113 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in cache, skipping pull
	I1217 00:04:44.872887   18113 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 as a tarball
	I1217 00:04:44.872897   18113 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 from local cache
	I1217 00:04:56.843697   18113 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 from cached tarball
	I1217 00:04:56.843733   18113 cache.go:243] Successfully downloaded all kic artifacts
	I1217 00:04:56.843770   18113 start.go:360] acquireMachinesLock for addons-401977: {Name:mk469783e29eb0a81971ed75239211715445c9d5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 00:04:56.843862   18113 start.go:364] duration metric: took 74.89µs to acquireMachinesLock for "addons-401977"
	I1217 00:04:56.843889   18113 start.go:93] Provisioning new machine with config: &{Name:addons-401977 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-401977 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 00:04:56.843956   18113 start.go:125] createHost starting for "" (driver="docker")
	I1217 00:04:56.845696   18113 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1217 00:04:56.845934   18113 start.go:159] libmachine.API.Create for "addons-401977" (driver="docker")
	I1217 00:04:56.845964   18113 client.go:173] LocalClient.Create starting
	I1217 00:04:56.846096   18113 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem
	I1217 00:04:56.947740   18113 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/cert.pem
	I1217 00:04:56.986529   18113 cli_runner.go:164] Run: docker network inspect addons-401977 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1217 00:04:57.004429   18113 cli_runner.go:211] docker network inspect addons-401977 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1217 00:04:57.004499   18113 network_create.go:284] running [docker network inspect addons-401977] to gather additional debugging logs...
	I1217 00:04:57.004517   18113 cli_runner.go:164] Run: docker network inspect addons-401977
	W1217 00:04:57.019094   18113 cli_runner.go:211] docker network inspect addons-401977 returned with exit code 1
	I1217 00:04:57.019120   18113 network_create.go:287] error running [docker network inspect addons-401977]: docker network inspect addons-401977: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-401977 not found
	I1217 00:04:57.019145   18113 network_create.go:289] output of [docker network inspect addons-401977]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-401977 not found
	
	** /stderr **
	I1217 00:04:57.019248   18113 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 00:04:57.034880   18113 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ef2180}
	I1217 00:04:57.034912   18113 network_create.go:124] attempt to create docker network addons-401977 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1217 00:04:57.034952   18113 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-401977 addons-401977
	I1217 00:04:57.078427   18113 network_create.go:108] docker network addons-401977 192.168.49.0/24 created
	I1217 00:04:57.078455   18113 kic.go:121] calculated static IP "192.168.49.2" for the "addons-401977" container
	I1217 00:04:57.078544   18113 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1217 00:04:57.093138   18113 cli_runner.go:164] Run: docker volume create addons-401977 --label name.minikube.sigs.k8s.io=addons-401977 --label created_by.minikube.sigs.k8s.io=true
	I1217 00:04:57.109025   18113 oci.go:103] Successfully created a docker volume addons-401977
	I1217 00:04:57.109119   18113 cli_runner.go:164] Run: docker run --rm --name addons-401977-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-401977 --entrypoint /usr/bin/test -v addons-401977:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib
	I1217 00:05:03.664941   18113 cli_runner.go:217] Completed: docker run --rm --name addons-401977-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-401977 --entrypoint /usr/bin/test -v addons-401977:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib: (6.555775228s)
	I1217 00:05:03.664975   18113 oci.go:107] Successfully prepared a docker volume addons-401977
	I1217 00:05:03.665060   18113 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1217 00:05:03.665073   18113 kic.go:194] Starting extracting preloaded images to volume ...
	I1217 00:05:03.665122   18113 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22168-12816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-401977:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir
	I1217 00:05:07.442476   18113 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22168-12816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-401977:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir: (3.777307344s)
	I1217 00:05:07.442509   18113 kic.go:203] duration metric: took 3.777432183s to extract preloaded images to volume ...
	W1217 00:05:07.442605   18113 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1217 00:05:07.442653   18113 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1217 00:05:07.442703   18113 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1217 00:05:07.497828   18113 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-401977 --name addons-401977 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-401977 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-401977 --network addons-401977 --ip 192.168.49.2 --volume addons-401977:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78
	I1217 00:05:07.776337   18113 cli_runner.go:164] Run: docker container inspect addons-401977 --format={{.State.Running}}
	I1217 00:05:07.794052   18113 cli_runner.go:164] Run: docker container inspect addons-401977 --format={{.State.Status}}
	I1217 00:05:07.811511   18113 cli_runner.go:164] Run: docker exec addons-401977 stat /var/lib/dpkg/alternatives/iptables
	I1217 00:05:07.855511   18113 oci.go:144] the created container "addons-401977" has a running status.
	I1217 00:05:07.855543   18113 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22168-12816/.minikube/machines/addons-401977/id_rsa...
	I1217 00:05:07.980091   18113 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22168-12816/.minikube/machines/addons-401977/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1217 00:05:08.007049   18113 cli_runner.go:164] Run: docker container inspect addons-401977 --format={{.State.Status}}
	I1217 00:05:08.027612   18113 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1217 00:05:08.027640   18113 kic_runner.go:114] Args: [docker exec --privileged addons-401977 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1217 00:05:08.075193   18113 cli_runner.go:164] Run: docker container inspect addons-401977 --format={{.State.Status}}
	I1217 00:05:08.098548   18113 machine.go:94] provisionDockerMachine start ...
	I1217 00:05:08.098656   18113 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-401977
	I1217 00:05:08.117838   18113 main.go:143] libmachine: Using SSH client type: native
	I1217 00:05:08.118184   18113 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1217 00:05:08.118205   18113 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 00:05:08.248592   18113 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-401977
	
	I1217 00:05:08.248623   18113 ubuntu.go:182] provisioning hostname "addons-401977"
	I1217 00:05:08.248680   18113 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-401977
	I1217 00:05:08.265867   18113 main.go:143] libmachine: Using SSH client type: native
	I1217 00:05:08.266160   18113 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1217 00:05:08.266179   18113 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-401977 && echo "addons-401977" | sudo tee /etc/hostname
	I1217 00:05:08.400856   18113 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-401977
	
	I1217 00:05:08.400920   18113 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-401977
	I1217 00:05:08.420240   18113 main.go:143] libmachine: Using SSH client type: native
	I1217 00:05:08.420542   18113 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1217 00:05:08.420571   18113 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-401977' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-401977/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-401977' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 00:05:08.544155   18113 main.go:143] libmachine: SSH cmd err, output: <nil>: 
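	Provisioning above runs over SSH to the forwarded port 32768 using the generated id_rsa key. A minimal sketch with golang.org/x/crypto/ssh that runs the same "hostname" probe; the path and port are copied from the log and would differ per run:

	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		// Key path mirrors the log; adjust for your own environment.
		keyBytes, err := os.ReadFile(os.ExpandEnv("$HOME/.minikube/machines/addons-401977/id_rsa"))
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(keyBytes)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway local container
		}
		client, err := ssh.Dial("tcp", "127.0.0.1:32768", cfg)
		if err != nil {
			panic(err)
		}
		defer client.Close()

		sess, err := client.NewSession()
		if err != nil {
			panic(err)
		}
		defer sess.Close()

		out, err := sess.CombinedOutput("hostname")
		if err != nil {
			panic(err)
		}
		fmt.Printf("remote hostname: %s", out)
	}
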
	I1217 00:05:08.544178   18113 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22168-12816/.minikube CaCertPath:/home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22168-12816/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22168-12816/.minikube}
	I1217 00:05:08.544199   18113 ubuntu.go:190] setting up certificates
	I1217 00:05:08.544211   18113 provision.go:84] configureAuth start
	I1217 00:05:08.544264   18113 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-401977
	I1217 00:05:08.560543   18113 provision.go:143] copyHostCerts
	I1217 00:05:08.560617   18113 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22168-12816/.minikube/ca.pem (1078 bytes)
	I1217 00:05:08.560753   18113 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22168-12816/.minikube/cert.pem (1123 bytes)
	I1217 00:05:08.560843   18113 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22168-12816/.minikube/key.pem (1679 bytes)
	I1217 00:05:08.560915   18113 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22168-12816/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca-key.pem org=jenkins.addons-401977 san=[127.0.0.1 192.168.49.2 addons-401977 localhost minikube]
	I1217 00:05:08.715974   18113 provision.go:177] copyRemoteCerts
	I1217 00:05:08.716030   18113 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 00:05:08.716083   18113 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-401977
	I1217 00:05:08.734195   18113 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/addons-401977/id_rsa Username:docker}
	I1217 00:05:08.827185   18113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1217 00:05:08.844679   18113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1217 00:05:08.861266   18113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 00:05:08.877459   18113 provision.go:87] duration metric: took 333.229549ms to configureAuth
	I1217 00:05:08.877481   18113 ubuntu.go:206] setting minikube options for container-runtime
	I1217 00:05:08.877634   18113 config.go:182] Loaded profile config "addons-401977": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:05:08.877718   18113 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-401977
	I1217 00:05:08.895491   18113 main.go:143] libmachine: Using SSH client type: native
	I1217 00:05:08.895708   18113 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1217 00:05:08.895725   18113 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 00:05:09.151437   18113 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 00:05:09.151462   18113 machine.go:97] duration metric: took 1.052893209s to provisionDockerMachine
	I1217 00:05:09.151475   18113 client.go:176] duration metric: took 12.305501842s to LocalClient.Create
	I1217 00:05:09.151494   18113 start.go:167] duration metric: took 12.305560138s to libmachine.API.Create "addons-401977"
	I1217 00:05:09.151504   18113 start.go:293] postStartSetup for "addons-401977" (driver="docker")
	I1217 00:05:09.151516   18113 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 00:05:09.151573   18113 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 00:05:09.151603   18113 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-401977
	I1217 00:05:09.168556   18113 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/addons-401977/id_rsa Username:docker}
	I1217 00:05:09.260136   18113 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 00:05:09.263403   18113 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 00:05:09.263423   18113 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 00:05:09.263432   18113 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-12816/.minikube/addons for local assets ...
	I1217 00:05:09.263492   18113 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-12816/.minikube/files for local assets ...
	I1217 00:05:09.263518   18113 start.go:296] duration metric: took 112.008672ms for postStartSetup
	I1217 00:05:09.263794   18113 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-401977
	I1217 00:05:09.281751   18113 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/addons-401977/config.json ...
	I1217 00:05:09.282033   18113 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 00:05:09.282088   18113 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-401977
	I1217 00:05:09.299336   18113 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/addons-401977/id_rsa Username:docker}
	I1217 00:05:09.386590   18113 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 00:05:09.390890   18113 start.go:128] duration metric: took 12.546921798s to createHost
	I1217 00:05:09.390907   18113 start.go:83] releasing machines lock for "addons-401977", held for 12.54703548s
	I1217 00:05:09.390975   18113 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-401977
	I1217 00:05:09.407214   18113 ssh_runner.go:195] Run: cat /version.json
	I1217 00:05:09.407242   18113 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 00:05:09.407252   18113 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-401977
	I1217 00:05:09.407302   18113 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-401977
	I1217 00:05:09.426384   18113 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/addons-401977/id_rsa Username:docker}
	I1217 00:05:09.426734   18113 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/addons-401977/id_rsa Username:docker}
	I1217 00:05:09.566091   18113 ssh_runner.go:195] Run: systemctl --version
	I1217 00:05:09.572144   18113 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 00:05:09.603804   18113 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 00:05:09.608124   18113 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 00:05:09.608195   18113 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 00:05:09.631576   18113 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1217 00:05:09.631597   18113 start.go:496] detecting cgroup driver to use...
	I1217 00:05:09.631630   18113 detect.go:190] detected "systemd" cgroup driver on host os
	I1217 00:05:09.631666   18113 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 00:05:09.646686   18113 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 00:05:09.658504   18113 docker.go:218] disabling cri-docker service (if available) ...
	I1217 00:05:09.658560   18113 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 00:05:09.675140   18113 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 00:05:09.691628   18113 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 00:05:09.763672   18113 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 00:05:09.844007   18113 docker.go:234] disabling docker service ...
	I1217 00:05:09.844074   18113 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 00:05:09.860519   18113 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 00:05:09.871833   18113 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 00:05:09.948862   18113 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 00:05:10.025768   18113 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 00:05:10.037108   18113 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 00:05:10.050010   18113 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 00:05:10.050067   18113 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:05:10.059500   18113 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1217 00:05:10.059548   18113 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:05:10.067651   18113 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:05:10.075474   18113 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:05:10.083310   18113 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 00:05:10.090491   18113 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:05:10.098154   18113 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:05:10.110589   18113 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:05:10.118304   18113 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 00:05:10.125079   18113 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1217 00:05:10.125127   18113 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1217 00:05:10.136134   18113 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 00:05:10.142818   18113 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:05:10.215099   18113 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1217 00:05:10.346471   18113 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 00:05:10.346544   18113 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 00:05:10.350230   18113 start.go:564] Will wait 60s for crictl version
	I1217 00:05:10.350287   18113 ssh_runner.go:195] Run: which crictl
	I1217 00:05:10.353519   18113 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 00:05:10.376156   18113 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
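	The "Will wait 60s for socket path" step above is essentially a bounded poll for /var/run/crio/crio.sock after the crio restart. A rough Go equivalent, using a hypothetical waitForPath helper rather than minikube's implementation:

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForPath polls for a file (here the CRI-O socket) until the deadline.
	func waitForPath(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
	}

	func main() {
		if err := waitForPath("/var/run/crio/crio.sock", 60*time.Second); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("crio socket is ready")
	}
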
	I1217 00:05:10.376267   18113 ssh_runner.go:195] Run: crio --version
	I1217 00:05:10.401763   18113 ssh_runner.go:195] Run: crio --version
	I1217 00:05:10.428658   18113 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1217 00:05:10.429739   18113 cli_runner.go:164] Run: docker network inspect addons-401977 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 00:05:10.445785   18113 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1217 00:05:10.449472   18113 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 00:05:10.458900   18113 kubeadm.go:884] updating cluster {Name:addons-401977 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-401977 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketV
MnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 00:05:10.459042   18113 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1217 00:05:10.459103   18113 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 00:05:10.488896   18113 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 00:05:10.488916   18113 crio.go:433] Images already preloaded, skipping extraction
	I1217 00:05:10.488961   18113 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 00:05:10.512578   18113 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 00:05:10.512596   18113 cache_images.go:86] Images are preloaded, skipping loading
	I1217 00:05:10.512603   18113 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.2 crio true true} ...
	I1217 00:05:10.512677   18113 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-401977 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:addons-401977 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 00:05:10.512737   18113 ssh_runner.go:195] Run: crio config
	I1217 00:05:10.555525   18113 cni.go:84] Creating CNI manager for ""
	I1217 00:05:10.555545   18113 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 00:05:10.555561   18113 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 00:05:10.555580   18113 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-401977 NodeName:addons-401977 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernet
es/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 00:05:10.555682   18113 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-401977"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 00:05:10.555739   18113 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1217 00:05:10.563646   18113 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 00:05:10.563710   18113 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 00:05:10.571116   18113 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1217 00:05:10.582523   18113 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1217 00:05:10.596418   18113 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
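	The kubeadm.yaml.new just copied over is the multi-document config printed above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A quick Go sketch that walks those documents with gopkg.in/yaml.v3, e.g. to confirm each kind made it into the file; this is an illustration, not part of minikube's own checks:

	package main

	import (
		"fmt"
		"io"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		// Path taken from the log; adjust as needed.
		f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
		if err != nil {
			panic(err)
		}
		defer f.Close()

		// The file holds several YAML documents separated by "---",
		// so decode them one at a time.
		dec := yaml.NewDecoder(f)
		for {
			var doc map[string]interface{}
			if err := dec.Decode(&doc); err == io.EOF {
				break
			} else if err != nil {
				panic(err)
			}
			fmt.Printf("kind=%v apiVersion=%v\n", doc["kind"], doc["apiVersion"])
		}
	}
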
	I1217 00:05:10.607861   18113 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1217 00:05:10.611032   18113 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 00:05:10.619690   18113 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:05:10.699166   18113 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 00:05:10.723144   18113 certs.go:69] Setting up /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/addons-401977 for IP: 192.168.49.2
	I1217 00:05:10.723254   18113 certs.go:195] generating shared ca certs ...
	I1217 00:05:10.723285   18113 certs.go:227] acquiring lock for ca certs: {Name:mk3fafd0dd66863a6056cb02497503a5e6afecf4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:05:10.723419   18113 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22168-12816/.minikube/ca.key
	I1217 00:05:10.889308   18113 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22168-12816/.minikube/ca.crt ...
	I1217 00:05:10.889337   18113 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/.minikube/ca.crt: {Name:mkad87bcfe71f8fef4f7432aa85f6a4d2072ed3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:05:10.889497   18113 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22168-12816/.minikube/ca.key ...
	I1217 00:05:10.889510   18113 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/.minikube/ca.key: {Name:mk52d01385ec5a003e642beb7bc53ba5d5e7dff7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:05:10.889612   18113 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22168-12816/.minikube/proxy-client-ca.key
	I1217 00:05:11.108126   18113 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22168-12816/.minikube/proxy-client-ca.crt ...
	I1217 00:05:11.108152   18113 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/.minikube/proxy-client-ca.crt: {Name:mkfbfa16ae86e4ac20e66123d6e5c2357f8d504f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:05:11.108302   18113 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22168-12816/.minikube/proxy-client-ca.key ...
	I1217 00:05:11.108313   18113 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/.minikube/proxy-client-ca.key: {Name:mk285953dc4c55f55c7256d71c31a2f9f336c4e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:05:11.108379   18113 certs.go:257] generating profile certs ...
	I1217 00:05:11.108431   18113 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/addons-401977/client.key
	I1217 00:05:11.108445   18113 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/addons-401977/client.crt with IP's: []
	I1217 00:05:11.277198   18113 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/addons-401977/client.crt ...
	I1217 00:05:11.277224   18113 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/addons-401977/client.crt: {Name:mkd11c3a01e684631f2f40bb6ba4f4d6517cdc7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:05:11.277375   18113 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/addons-401977/client.key ...
	I1217 00:05:11.277386   18113 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/addons-401977/client.key: {Name:mk057f36ae4609090c04147c0dc0e7f184016f49 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:05:11.277455   18113 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/addons-401977/apiserver.key.02537cb5
	I1217 00:05:11.277473   18113 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/addons-401977/apiserver.crt.02537cb5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1217 00:05:11.518968   18113 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/addons-401977/apiserver.crt.02537cb5 ...
	I1217 00:05:11.519005   18113 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/addons-401977/apiserver.crt.02537cb5: {Name:mk5a6e35f1890295558c040c004f2f7d78d1bed4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:05:11.519155   18113 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/addons-401977/apiserver.key.02537cb5 ...
	I1217 00:05:11.519169   18113 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/addons-401977/apiserver.key.02537cb5: {Name:mk234d90ae11d8a3b4b3e4083e99530de84ea660 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:05:11.519241   18113 certs.go:382] copying /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/addons-401977/apiserver.crt.02537cb5 -> /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/addons-401977/apiserver.crt
	I1217 00:05:11.519320   18113 certs.go:386] copying /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/addons-401977/apiserver.key.02537cb5 -> /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/addons-401977/apiserver.key
	I1217 00:05:11.519397   18113 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/addons-401977/proxy-client.key
	I1217 00:05:11.519425   18113 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/addons-401977/proxy-client.crt with IP's: []
	I1217 00:05:11.630780   18113 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/addons-401977/proxy-client.crt ...
	I1217 00:05:11.630811   18113 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/addons-401977/proxy-client.crt: {Name:mk2aa7b0d83f33ae26f405c809be02f906021b76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:05:11.630969   18113 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/addons-401977/proxy-client.key ...
	I1217 00:05:11.631001   18113 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/addons-401977/proxy-client.key: {Name:mkbc54d3bf0f6d79316494e4c7184a7ab041fbb1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:05:11.631171   18113 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca-key.pem (1679 bytes)
	I1217 00:05:11.631209   18113 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem (1078 bytes)
	I1217 00:05:11.631235   18113 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/cert.pem (1123 bytes)
	I1217 00:05:11.631273   18113 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/key.pem (1679 bytes)
	I1217 00:05:11.631894   18113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 00:05:11.649357   18113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1217 00:05:11.666174   18113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 00:05:11.683394   18113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1671 bytes)
	I1217 00:05:11.699621   18113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/addons-401977/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1217 00:05:11.715356   18113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/addons-401977/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 00:05:11.732370   18113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/addons-401977/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 00:05:11.748700   18113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/addons-401977/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1217 00:05:11.764999   18113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 00:05:11.782575   18113 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 00:05:11.794563   18113 ssh_runner.go:195] Run: openssl version
	I1217 00:05:11.800464   18113 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:05:11.807039   18113 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 00:05:11.816084   18113 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:05:11.819831   18113 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 00:05 /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:05:11.819890   18113 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:05:11.853012   18113 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 00:05:11.860609   18113 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1217 00:05:11.867286   18113 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 00:05:11.870403   18113 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1217 00:05:11.870442   18113 kubeadm.go:401] StartCluster: {Name:addons-401977 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-401977 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:05:11.870498   18113 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 00:05:11.870530   18113 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 00:05:11.895249   18113 cri.go:89] found id: ""
	I1217 00:05:11.895305   18113 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 00:05:11.903188   18113 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 00:05:11.910421   18113 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 00:05:11.910463   18113 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 00:05:11.917442   18113 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 00:05:11.917456   18113 kubeadm.go:158] found existing configuration files:
	
	I1217 00:05:11.917485   18113 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1217 00:05:11.924460   18113 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 00:05:11.924511   18113 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 00:05:11.931174   18113 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1217 00:05:11.938190   18113 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 00:05:11.938225   18113 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 00:05:11.944705   18113 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1217 00:05:11.951151   18113 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 00:05:11.951195   18113 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 00:05:11.957894   18113 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1217 00:05:11.964633   18113 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 00:05:11.964697   18113 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 00:05:11.971053   18113 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 00:05:12.023229   18113 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1217 00:05:12.076724   18113 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 00:05:20.712347   18113 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1217 00:05:20.712420   18113 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 00:05:20.712502   18113 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 00:05:20.712600   18113 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1217 00:05:20.712654   18113 kubeadm.go:319] OS: Linux
	I1217 00:05:20.712696   18113 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 00:05:20.712757   18113 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 00:05:20.712824   18113 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 00:05:20.712870   18113 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 00:05:20.712931   18113 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 00:05:20.713022   18113 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 00:05:20.713077   18113 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 00:05:20.713117   18113 kubeadm.go:319] CGROUPS_IO: enabled
	I1217 00:05:20.713179   18113 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 00:05:20.713265   18113 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 00:05:20.713353   18113 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 00:05:20.713407   18113 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 00:05:20.714893   18113 out.go:252]   - Generating certificates and keys ...
	I1217 00:05:20.714964   18113 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 00:05:20.715055   18113 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 00:05:20.715115   18113 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1217 00:05:20.715184   18113 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1217 00:05:20.715240   18113 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1217 00:05:20.715287   18113 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1217 00:05:20.715346   18113 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1217 00:05:20.715453   18113 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-401977 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1217 00:05:20.715499   18113 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1217 00:05:20.715615   18113 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-401977 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1217 00:05:20.715672   18113 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1217 00:05:20.715726   18113 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1217 00:05:20.715769   18113 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1217 00:05:20.715839   18113 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 00:05:20.715885   18113 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 00:05:20.715971   18113 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 00:05:20.716066   18113 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 00:05:20.716217   18113 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 00:05:20.716290   18113 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 00:05:20.716418   18113 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 00:05:20.716487   18113 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 00:05:20.717697   18113 out.go:252]   - Booting up control plane ...
	I1217 00:05:20.717793   18113 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 00:05:20.717864   18113 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 00:05:20.717922   18113 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 00:05:20.718059   18113 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 00:05:20.718182   18113 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 00:05:20.718290   18113 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 00:05:20.718395   18113 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 00:05:20.718468   18113 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 00:05:20.718662   18113 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 00:05:20.718808   18113 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 00:05:20.718866   18113 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.709975ms
	I1217 00:05:20.718985   18113 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1217 00:05:20.719104   18113 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1217 00:05:20.719218   18113 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1217 00:05:20.719296   18113 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1217 00:05:20.719412   18113 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.506189451s
	I1217 00:05:20.719504   18113 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.441238256s
	I1217 00:05:20.719598   18113 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.501420628s
	I1217 00:05:20.719731   18113 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1217 00:05:20.719849   18113 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1217 00:05:20.719900   18113 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1217 00:05:20.720116   18113 kubeadm.go:319] [mark-control-plane] Marking the node addons-401977 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1217 00:05:20.720194   18113 kubeadm.go:319] [bootstrap-token] Using token: vnha8o.m9mo8pfeym1waa3p
	I1217 00:05:20.721480   18113 out.go:252]   - Configuring RBAC rules ...
	I1217 00:05:20.721585   18113 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1217 00:05:20.721678   18113 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1217 00:05:20.721851   18113 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1217 00:05:20.722033   18113 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1217 00:05:20.722203   18113 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1217 00:05:20.722330   18113 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1217 00:05:20.722433   18113 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1217 00:05:20.722486   18113 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1217 00:05:20.722547   18113 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1217 00:05:20.722560   18113 kubeadm.go:319] 
	I1217 00:05:20.722644   18113 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1217 00:05:20.722659   18113 kubeadm.go:319] 
	I1217 00:05:20.722773   18113 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1217 00:05:20.722784   18113 kubeadm.go:319] 
	I1217 00:05:20.722819   18113 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1217 00:05:20.722921   18113 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1217 00:05:20.723014   18113 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1217 00:05:20.723024   18113 kubeadm.go:319] 
	I1217 00:05:20.723101   18113 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1217 00:05:20.723110   18113 kubeadm.go:319] 
	I1217 00:05:20.723176   18113 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1217 00:05:20.723185   18113 kubeadm.go:319] 
	I1217 00:05:20.723262   18113 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1217 00:05:20.723382   18113 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1217 00:05:20.723446   18113 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1217 00:05:20.723452   18113 kubeadm.go:319] 
	I1217 00:05:20.723520   18113 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1217 00:05:20.723586   18113 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1217 00:05:20.723591   18113 kubeadm.go:319] 
	I1217 00:05:20.723672   18113 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token vnha8o.m9mo8pfeym1waa3p \
	I1217 00:05:20.723763   18113 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:a7c34974519aee4953e03245da076d7a2eba06e40135880a85806e2dab303fa1 \
	I1217 00:05:20.723782   18113 kubeadm.go:319] 	--control-plane 
	I1217 00:05:20.723787   18113 kubeadm.go:319] 
	I1217 00:05:20.723900   18113 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1217 00:05:20.723910   18113 kubeadm.go:319] 
	I1217 00:05:20.724068   18113 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token vnha8o.m9mo8pfeym1waa3p \
	I1217 00:05:20.724207   18113 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:a7c34974519aee4953e03245da076d7a2eba06e40135880a85806e2dab303fa1 
	I1217 00:05:20.724222   18113 cni.go:84] Creating CNI manager for ""
	I1217 00:05:20.724230   18113 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 00:05:20.725530   18113 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1217 00:05:20.726608   18113 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1217 00:05:20.730767   18113 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1217 00:05:20.730782   18113 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1217 00:05:20.743299   18113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1217 00:05:20.936870   18113 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1217 00:05:20.936968   18113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:05:20.936982   18113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-401977 minikube.k8s.io/updated_at=2025_12_17T00_05_20_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=c7bb9b74fe8fa422b352c813eb039f077f405cb1 minikube.k8s.io/name=addons-401977 minikube.k8s.io/primary=true
	I1217 00:05:21.010781   18113 ops.go:34] apiserver oom_adj: -16
	I1217 00:05:21.010829   18113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:05:21.511034   18113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:05:22.011485   18113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:05:22.510981   18113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:05:23.011226   18113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:05:23.511238   18113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:05:24.011136   18113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:05:24.510903   18113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:05:25.010982   18113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:05:25.511879   18113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:05:25.573514   18113 kubeadm.go:1114] duration metric: took 4.63660723s to wait for elevateKubeSystemPrivileges
	I1217 00:05:25.573554   18113 kubeadm.go:403] duration metric: took 13.703113211s to StartCluster
	I1217 00:05:25.573590   18113 settings.go:142] acquiring lock: {Name:mk7d7632cd00ceda791845d793d841181ea8188a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:05:25.573709   18113 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22168-12816/kubeconfig
	I1217 00:05:25.574223   18113 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/kubeconfig: {Name:mkd977475b39383babd1c1bcd25b2b3c1ea2d727 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:05:25.574416   18113 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1217 00:05:25.574442   18113 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 00:05:25.574505   18113 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1217 00:05:25.574633   18113 addons.go:70] Setting inspektor-gadget=true in profile "addons-401977"
	I1217 00:05:25.574645   18113 addons.go:70] Setting metrics-server=true in profile "addons-401977"
	I1217 00:05:25.574660   18113 addons.go:239] Setting addon inspektor-gadget=true in "addons-401977"
	I1217 00:05:25.574668   18113 addons.go:239] Setting addon metrics-server=true in "addons-401977"
	I1217 00:05:25.574703   18113 host.go:66] Checking if "addons-401977" exists ...
	I1217 00:05:25.574712   18113 addons.go:70] Setting gcp-auth=true in profile "addons-401977"
	I1217 00:05:25.574705   18113 addons.go:70] Setting default-storageclass=true in profile "addons-401977"
	I1217 00:05:25.574733   18113 mustload.go:66] Loading cluster: addons-401977
	I1217 00:05:25.574748   18113 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-401977"
	I1217 00:05:25.574809   18113 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-401977"
	I1217 00:05:25.574848   18113 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-401977"
	I1217 00:05:25.574828   18113 addons.go:70] Setting cloud-spanner=true in profile "addons-401977"
	I1217 00:05:25.574884   18113 addons.go:239] Setting addon cloud-spanner=true in "addons-401977"
	I1217 00:05:25.574885   18113 host.go:66] Checking if "addons-401977" exists ...
	I1217 00:05:25.574918   18113 config.go:182] Loaded profile config "addons-401977": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:05:25.574900   18113 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-401977"
	I1217 00:05:25.574950   18113 addons.go:70] Setting storage-provisioner=true in profile "addons-401977"
	I1217 00:05:25.574971   18113 host.go:66] Checking if "addons-401977" exists ...
	I1217 00:05:25.574981   18113 addons.go:239] Setting addon storage-provisioner=true in "addons-401977"
	I1217 00:05:25.575023   18113 host.go:66] Checking if "addons-401977" exists ...
	I1217 00:05:25.575032   18113 addons.go:70] Setting registry=true in profile "addons-401977"
	I1217 00:05:25.575778   18113 addons.go:70] Setting ingress-dns=true in profile "addons-401977"
	I1217 00:05:25.575805   18113 addons.go:239] Setting addon ingress-dns=true in "addons-401977"
	I1217 00:05:25.575842   18113 host.go:66] Checking if "addons-401977" exists ...
	I1217 00:05:25.575708   18113 addons.go:70] Setting ingress=true in profile "addons-401977"
	I1217 00:05:25.576023   18113 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-401977"
	I1217 00:05:25.576045   18113 addons.go:239] Setting addon ingress=true in "addons-401977"
	I1217 00:05:25.576073   18113 host.go:66] Checking if "addons-401977" exists ...
	I1217 00:05:25.576089   18113 addons.go:70] Setting registry-creds=true in profile "addons-401977"
	I1217 00:05:25.576106   18113 addons.go:239] Setting addon registry-creds=true in "addons-401977"
	I1217 00:05:25.576121   18113 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-401977"
	I1217 00:05:25.576130   18113 host.go:66] Checking if "addons-401977" exists ...
	I1217 00:05:25.576136   18113 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-401977"
	I1217 00:05:25.576261   18113 addons.go:70] Setting volcano=true in profile "addons-401977"
	I1217 00:05:25.576274   18113 addons.go:239] Setting addon volcano=true in "addons-401977"
	I1217 00:05:25.576295   18113 host.go:66] Checking if "addons-401977" exists ...
	I1217 00:05:25.576314   18113 cli_runner.go:164] Run: docker container inspect addons-401977 --format={{.State.Status}}
	I1217 00:05:25.576450   18113 cli_runner.go:164] Run: docker container inspect addons-401977 --format={{.State.Status}}
	I1217 00:05:25.576588   18113 cli_runner.go:164] Run: docker container inspect addons-401977 --format={{.State.Status}}
	I1217 00:05:25.575787   18113 addons.go:239] Setting addon registry=true in "addons-401977"
	I1217 00:05:25.576727   18113 host.go:66] Checking if "addons-401977" exists ...
	I1217 00:05:25.576949   18113 cli_runner.go:164] Run: docker container inspect addons-401977 --format={{.State.Status}}
	I1217 00:05:25.577208   18113 cli_runner.go:164] Run: docker container inspect addons-401977 --format={{.State.Status}}
	I1217 00:05:25.577507   18113 out.go:179] * Verifying Kubernetes components...
	I1217 00:05:25.574705   18113 host.go:66] Checking if "addons-401977" exists ...
	I1217 00:05:25.577530   18113 cli_runner.go:164] Run: docker container inspect addons-401977 --format={{.State.Status}}
	I1217 00:05:25.578044   18113 cli_runner.go:164] Run: docker container inspect addons-401977 --format={{.State.Status}}
	I1217 00:05:25.578294   18113 cli_runner.go:164] Run: docker container inspect addons-401977 --format={{.State.Status}}
	I1217 00:05:25.576045   18113 addons.go:70] Setting volumesnapshots=true in profile "addons-401977"
	I1217 00:05:25.578489   18113 addons.go:239] Setting addon volumesnapshots=true in "addons-401977"
	I1217 00:05:25.578517   18113 host.go:66] Checking if "addons-401977" exists ...
	I1217 00:05:25.578784   18113 cli_runner.go:164] Run: docker container inspect addons-401977 --format={{.State.Status}}
	I1217 00:05:25.578794   18113 cli_runner.go:164] Run: docker container inspect addons-401977 --format={{.State.Status}}
	I1217 00:05:25.574705   18113 config.go:182] Loaded profile config "addons-401977": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:05:25.577728   18113 cli_runner.go:164] Run: docker container inspect addons-401977 --format={{.State.Status}}
	I1217 00:05:25.580583   18113 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:05:25.581378   18113 cli_runner.go:164] Run: docker container inspect addons-401977 --format={{.State.Status}}
	I1217 00:05:25.575762   18113 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-401977"
	I1217 00:05:25.582363   18113 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-401977"
	I1217 00:05:25.582394   18113 host.go:66] Checking if "addons-401977" exists ...
	I1217 00:05:25.574633   18113 addons.go:70] Setting yakd=true in profile "addons-401977"
	I1217 00:05:25.582436   18113 addons.go:239] Setting addon yakd=true in "addons-401977"
	I1217 00:05:25.582455   18113 host.go:66] Checking if "addons-401977" exists ...
	I1217 00:05:25.583340   18113 cli_runner.go:164] Run: docker container inspect addons-401977 --format={{.State.Status}}
	I1217 00:05:25.583352   18113 cli_runner.go:164] Run: docker container inspect addons-401977 --format={{.State.Status}}
	I1217 00:05:25.587371   18113 cli_runner.go:164] Run: docker container inspect addons-401977 --format={{.State.Status}}
	I1217 00:05:25.577746   18113 host.go:66] Checking if "addons-401977" exists ...
	I1217 00:05:25.583344   18113 cli_runner.go:164] Run: docker container inspect addons-401977 --format={{.State.Status}}
	I1217 00:05:25.589284   18113 cli_runner.go:164] Run: docker container inspect addons-401977 --format={{.State.Status}}
	I1217 00:05:25.623930   18113 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-401977"
	I1217 00:05:25.623984   18113 host.go:66] Checking if "addons-401977" exists ...
	I1217 00:05:25.624520   18113 cli_runner.go:164] Run: docker container inspect addons-401977 --format={{.State.Status}}
	I1217 00:05:25.628511   18113 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1217 00:05:25.629863   18113 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1217 00:05:25.629886   18113 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1217 00:05:25.629951   18113 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-401977
	I1217 00:05:25.635829   18113 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.47.0
	I1217 00:05:25.637951   18113 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1217 00:05:25.638875   18113 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1217 00:05:25.638982   18113 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-401977
	W1217 00:05:25.653461   18113 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1217 00:05:25.658946   18113 host.go:66] Checking if "addons-401977" exists ...
	I1217 00:05:25.658968   18113 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1217 00:05:25.661648   18113 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1217 00:05:25.661667   18113 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1217 00:05:25.661735   18113 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-401977
	I1217 00:05:25.661916   18113 addons.go:239] Setting addon default-storageclass=true in "addons-401977"
	I1217 00:05:25.661955   18113 host.go:66] Checking if "addons-401977" exists ...
	I1217 00:05:25.662395   18113 cli_runner.go:164] Run: docker container inspect addons-401977 --format={{.State.Status}}
	I1217 00:05:25.677044   18113 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1217 00:05:25.679284   18113 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1217 00:05:25.679756   18113 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1217 00:05:25.679836   18113 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-401977
	I1217 00:05:25.685075   18113 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1217 00:05:25.689845   18113 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 00:05:25.691032   18113 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1217 00:05:25.691063   18113 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1217 00:05:25.691123   18113 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-401977
	I1217 00:05:25.691414   18113 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:05:25.691432   18113 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 00:05:25.691486   18113 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-401977
	I1217 00:05:25.693847   18113 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/addons-401977/id_rsa Username:docker}
	I1217 00:05:25.697650   18113 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/addons-401977/id_rsa Username:docker}
	I1217 00:05:25.700417   18113 out.go:179]   - Using image docker.io/registry:3.0.0
	I1217 00:05:25.701912   18113 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1217 00:05:25.701901   18113 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1217 00:05:25.702141   18113 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1217 00:05:25.703215   18113 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1217 00:05:25.703233   18113 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1217 00:05:25.703305   18113 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-401977
	I1217 00:05:25.704124   18113 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1217 00:05:25.704151   18113 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1217 00:05:25.704228   18113 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-401977
	I1217 00:05:25.707175   18113 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1217 00:05:25.708434   18113 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1217 00:05:25.709520   18113 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1217 00:05:25.710856   18113 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1217 00:05:25.712124   18113 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1217 00:05:25.713283   18113 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1217 00:05:25.714738   18113 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1217 00:05:25.714800   18113 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1217 00:05:25.716223   18113 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1217 00:05:25.716229   18113 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1217 00:05:25.717745   18113 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1217 00:05:25.717827   18113 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-401977
	I1217 00:05:25.725738   18113 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1217 00:05:25.725821   18113 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1217 00:05:25.725912   18113 out.go:179]   - Using image docker.io/busybox:stable
	I1217 00:05:25.725982   18113 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1217 00:05:25.727335   18113 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1217 00:05:25.727351   18113 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1217 00:05:25.727409   18113 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-401977
	I1217 00:05:25.727908   18113 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1217 00:05:25.727913   18113 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.1
	I1217 00:05:25.728284   18113 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1217 00:05:25.728301   18113 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1217 00:05:25.728348   18113 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-401977
	I1217 00:05:25.728491   18113 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1217 00:05:25.728501   18113 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1217 00:05:25.729054   18113 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-401977
	I1217 00:05:25.729468   18113 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1217 00:05:25.729484   18113 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1217 00:05:25.729540   18113 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-401977
	I1217 00:05:25.730161   18113 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1217 00:05:25.730469   18113 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1217 00:05:25.730660   18113 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-401977
	I1217 00:05:25.732081   18113 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/addons-401977/id_rsa Username:docker}
	I1217 00:05:25.737079   18113 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 00:05:25.737099   18113 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 00:05:25.737146   18113 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-401977
	I1217 00:05:25.757628   18113 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/addons-401977/id_rsa Username:docker}
	I1217 00:05:25.760718   18113 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/addons-401977/id_rsa Username:docker}
	I1217 00:05:25.776255   18113 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/addons-401977/id_rsa Username:docker}
	I1217 00:05:25.778157   18113 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1217 00:05:25.795856   18113 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/addons-401977/id_rsa Username:docker}
	I1217 00:05:25.796759   18113 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/addons-401977/id_rsa Username:docker}
	I1217 00:05:25.807222   18113 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/addons-401977/id_rsa Username:docker}
	I1217 00:05:25.808319   18113 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/addons-401977/id_rsa Username:docker}
	I1217 00:05:25.817846   18113 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/addons-401977/id_rsa Username:docker}
	I1217 00:05:25.818278   18113 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/addons-401977/id_rsa Username:docker}
	I1217 00:05:25.821110   18113 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/addons-401977/id_rsa Username:docker}
	I1217 00:05:25.825929   18113 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/addons-401977/id_rsa Username:docker}
	I1217 00:05:25.830723   18113 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/addons-401977/id_rsa Username:docker}
	I1217 00:05:25.834516   18113 ssh_runner.go:195] Run: sudo systemctl start kubelet
	W1217 00:05:25.835046   18113 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1217 00:05:25.835101   18113 retry.go:31] will retry after 129.374855ms: ssh: handshake failed: EOF
	I1217 00:05:25.925653   18113 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1217 00:05:25.935321   18113 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1217 00:05:25.946426   18113 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1217 00:05:25.946451   18113 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1217 00:05:25.980051   18113 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1217 00:05:25.988313   18113 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1217 00:05:25.988337   18113 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1217 00:05:25.994586   18113 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:05:26.001723   18113 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1217 00:05:26.001743   18113 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1217 00:05:26.014447   18113 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1217 00:05:26.014469   18113 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1217 00:05:26.015981   18113 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1217 00:05:26.016680   18113 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1217 00:05:26.016826   18113 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1217 00:05:26.019003   18113 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1217 00:05:26.022518   18113 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1217 00:05:26.028131   18113 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1217 00:05:26.028149   18113 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1217 00:05:26.035087   18113 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1217 00:05:26.035113   18113 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1217 00:05:26.044281   18113 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1217 00:05:26.044314   18113 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1217 00:05:26.045536   18113 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1217 00:05:26.052433   18113 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1217 00:05:26.052544   18113 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1217 00:05:26.052687   18113 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:05:26.063544   18113 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1217 00:05:26.063624   18113 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1217 00:05:26.082433   18113 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1217 00:05:26.082453   18113 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1217 00:05:26.088579   18113 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1217 00:05:26.088607   18113 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1217 00:05:26.100929   18113 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1217 00:05:26.101054   18113 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1217 00:05:26.102001   18113 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1217 00:05:26.102016   18113 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1217 00:05:26.123082   18113 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1217 00:05:26.123107   18113 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1217 00:05:26.133573   18113 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1217 00:05:26.133599   18113 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1217 00:05:26.139443   18113 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1217 00:05:26.139467   18113 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1217 00:05:26.142584   18113 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1217 00:05:26.157472   18113 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1217 00:05:26.157498   18113 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1217 00:05:26.164411   18113 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1217 00:05:26.167919   18113 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1217 00:05:26.179656   18113 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1217 00:05:26.183124   18113 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1217 00:05:26.185028   18113 node_ready.go:35] waiting up to 6m0s for node "addons-401977" to be "Ready" ...
	I1217 00:05:26.186207   18113 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1217 00:05:26.294149   18113 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1217 00:05:26.294180   18113 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1217 00:05:26.372091   18113 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1217 00:05:26.372120   18113 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1217 00:05:26.435844   18113 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1217 00:05:26.435873   18113 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1217 00:05:26.487157   18113 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1217 00:05:26.487189   18113 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1217 00:05:26.537871   18113 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1217 00:05:26.537894   18113 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1217 00:05:26.585689   18113 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1217 00:05:26.585722   18113 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1217 00:05:26.621601   18113 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1217 00:05:26.694507   18113 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-401977" context rescaled to 1 replicas
	I1217 00:05:26.943950   18113 addons.go:495] Verifying addon registry=true in "addons-401977"
	I1217 00:05:26.945762   18113 out.go:179] * Verifying registry addon...
	I1217 00:05:26.948713   18113 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1217 00:05:26.951300   18113 addons.go:495] Verifying addon metrics-server=true in "addons-401977"
	I1217 00:05:26.954003   18113 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1217 00:05:26.954024   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1217 00:05:26.956545   18113 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I1217 00:05:27.152488   18113 addons.go:495] Verifying addon ingress=true in "addons-401977"
	I1217 00:05:27.153781   18113 out.go:179] * Verifying ingress addon...
	I1217 00:05:27.155679   18113 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1217 00:05:27.158087   18113 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1217 00:05:27.158108   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:05:27.452468   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:05:27.515548   18113 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.335842079s)
	W1217 00:05:27.515615   18113 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1217 00:05:27.515632   18113 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.329384673s)
	I1217 00:05:27.515642   18113 retry.go:31] will retry after 150.023973ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1217 00:05:27.515897   18113 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-401977"
	I1217 00:05:27.519903   18113 out.go:179] * Verifying csi-hostpath-driver addon...
	I1217 00:05:27.519906   18113 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-401977 service yakd-dashboard -n yakd-dashboard
	
	I1217 00:05:27.522362   18113 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1217 00:05:27.524462   18113 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1217 00:05:27.524475   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:05:27.659849   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:05:27.665953   18113 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1217 00:05:27.952961   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:05:28.025144   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:05:28.160033   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1217 00:05:28.187983   18113 node_ready.go:57] node "addons-401977" has "Ready":"False" status (will retry)
	I1217 00:05:28.452118   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:05:28.552269   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:05:28.659623   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:05:28.951375   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:05:29.025924   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:05:29.159021   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:05:29.451736   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:05:29.552781   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:05:29.658286   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:05:29.952045   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:05:30.024964   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:05:30.091841   18113 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.42584844s)
	I1217 00:05:30.158763   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:05:30.452109   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:05:30.553183   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:05:30.659306   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1217 00:05:30.687684   18113 node_ready.go:57] node "addons-401977" has "Ready":"False" status (will retry)
	I1217 00:05:30.951860   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:05:31.025160   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:05:31.159257   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:05:31.452229   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:05:31.552724   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:05:31.658268   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:05:31.952502   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:05:32.024700   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:05:32.158826   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:05:32.452123   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:05:32.553345   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:05:32.658855   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:05:32.951974   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:05:33.025605   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:05:33.158765   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1217 00:05:33.187880   18113 node_ready.go:57] node "addons-401977" has "Ready":"False" status (will retry)
	I1217 00:05:33.268060   18113 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1217 00:05:33.268126   18113 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-401977
	I1217 00:05:33.285232   18113 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/addons-401977/id_rsa Username:docker}
	I1217 00:05:33.389351   18113 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1217 00:05:33.401815   18113 addons.go:239] Setting addon gcp-auth=true in "addons-401977"
	I1217 00:05:33.401867   18113 host.go:66] Checking if "addons-401977" exists ...
	I1217 00:05:33.402272   18113 cli_runner.go:164] Run: docker container inspect addons-401977 --format={{.State.Status}}
	I1217 00:05:33.420007   18113 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1217 00:05:33.420075   18113 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-401977
	I1217 00:05:33.436535   18113 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/addons-401977/id_rsa Username:docker}
	I1217 00:05:33.451782   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:05:33.525584   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:05:33.527807   18113 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1217 00:05:33.529176   18113 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1217 00:05:33.530497   18113 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1217 00:05:33.530511   18113 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1217 00:05:33.543144   18113 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1217 00:05:33.543163   18113 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1217 00:05:33.555413   18113 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1217 00:05:33.555433   18113 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1217 00:05:33.567673   18113 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1217 00:05:33.658738   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:05:33.858630   18113 addons.go:495] Verifying addon gcp-auth=true in "addons-401977"
	I1217 00:05:33.859909   18113 out.go:179] * Verifying gcp-auth addon...
	I1217 00:05:33.861843   18113 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1217 00:05:33.863983   18113 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1217 00:05:33.864005   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:05:33.951250   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:05:34.025979   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:05:34.158506   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:05:34.365311   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:05:34.452240   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:05:34.524986   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:05:34.659002   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:05:34.864878   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:05:34.951147   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:05:35.025329   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:05:35.158897   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1217 00:05:35.188214   18113 node_ready.go:57] node "addons-401977" has "Ready":"False" status (will retry)
	I1217 00:05:35.364774   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:05:35.451184   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:05:35.525836   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:05:35.658673   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:05:35.864668   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:05:35.952258   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:05:36.025610   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:05:36.159237   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:05:36.365122   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:05:36.451599   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:05:36.525453   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:05:36.659062   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:05:36.864845   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:05:36.951144   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:05:37.025550   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:05:37.158933   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:05:37.364632   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:05:37.451943   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:05:37.525660   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:05:37.658056   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1217 00:05:37.688413   18113 node_ready.go:57] node "addons-401977" has "Ready":"False" status (will retry)
	I1217 00:05:37.864801   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:05:37.951837   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:05:38.025245   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:05:38.159076   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:05:38.364524   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:05:38.451868   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:05:38.525420   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:05:38.658931   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:05:38.864839   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:05:38.952236   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:05:39.025667   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:05:39.158104   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:05:39.364902   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:05:39.451212   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:05:39.525608   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:05:39.659288   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:05:39.865413   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:05:39.951789   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:05:40.025318   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:05:40.158816   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1217 00:05:40.188024   18113 node_ready.go:57] node "addons-401977" has "Ready":"False" status (will retry)
	I1217 00:05:40.364468   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:05:40.451671   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:05:40.525058   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:05:40.659053   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:05:40.864965   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:05:40.951385   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:05:41.025604   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:05:41.159144   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:05:41.364884   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:05:41.451196   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:05:41.525543   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:05:41.659359   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:05:41.865203   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:05:41.951248   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:05:42.025636   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:05:42.159193   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1217 00:05:42.188114   18113 node_ready.go:57] node "addons-401977" has "Ready":"False" status (will retry)
	I1217 00:05:42.364532   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:05:42.451702   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:05:42.525059   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:05:42.659395   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:05:42.865463   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:05:42.966251   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:05:43.025528   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:05:43.159177   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:05:43.364672   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:05:43.451945   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:05:43.525340   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:05:43.658930   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:05:43.864927   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:05:43.951171   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:05:44.025375   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:05:44.158752   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:05:44.364348   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:05:44.451788   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:05:44.525109   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:05:44.658703   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1217 00:05:44.688203   18113 node_ready.go:57] node "addons-401977" has "Ready":"False" status (will retry)
	I1217 00:05:44.864803   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:05:44.951036   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:05:45.025275   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:05:45.158722   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:05:45.365151   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:05:45.451429   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:05:45.524701   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:05:45.658520   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:05:45.864156   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:05:45.951609   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:05:46.025062   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:05:46.158569   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:05:46.364976   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:05:46.451384   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:05:46.525788   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:05:46.658712   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:05:46.864240   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:05:46.951682   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:05:47.025059   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:05:47.158583   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1217 00:05:47.187723   18113 node_ready.go:57] node "addons-401977" has "Ready":"False" status (will retry)
	I1217 00:05:47.365652   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:05:47.451814   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:05:47.525118   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:05:47.658732   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:05:47.864062   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:05:47.951284   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:05:48.025628   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:05:48.158078   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:05:48.364544   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:05:48.452067   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:05:48.525554   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:05:48.659206   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:05:48.864937   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:05:48.951233   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:05:49.025567   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:05:49.159094   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:05:49.365620   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:05:49.451941   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:05:49.525320   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:05:49.658961   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1217 00:05:49.688289   18113 node_ready.go:57] node "addons-401977" has "Ready":"False" status (will retry)
	I1217 00:05:49.864873   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:05:49.951228   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:05:50.025387   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:05:50.159077   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:05:50.364859   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:05:50.451214   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:05:50.525877   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:05:50.658326   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:05:50.865527   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:05:50.966060   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:05:51.025014   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:05:51.158592   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:05:51.365044   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:05:51.451252   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:05:51.525669   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:05:51.659365   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:05:51.865351   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:05:51.951569   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:05:52.024643   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:05:52.158453   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1217 00:05:52.187719   18113 node_ready.go:57] node "addons-401977" has "Ready":"False" status (will retry)
	I1217 00:05:52.365307   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:05:52.451624   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:05:52.524974   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:05:52.658728   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:05:52.864405   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:05:52.951700   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:05:53.024696   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:05:53.158083   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:05:53.364850   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:05:53.451023   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:05:53.525848   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:05:53.658509   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:05:53.865102   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:05:53.951386   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:05:54.025584   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:05:54.159099   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1217 00:05:54.188094   18113 node_ready.go:57] node "addons-401977" has "Ready":"False" status (will retry)
	I1217 00:05:54.364590   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:05:54.451934   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:05:54.525230   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:05:54.659286   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:05:54.864767   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:05:54.952083   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:05:55.025335   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:05:55.159063   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:05:55.364813   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:05:55.451087   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:05:55.525391   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:05:55.659261   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:05:55.865117   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:05:55.951290   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:05:56.025569   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:05:56.158958   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:05:56.364699   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:05:56.452050   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:05:56.525310   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:05:56.658963   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1217 00:05:56.688228   18113 node_ready.go:57] node "addons-401977" has "Ready":"False" status (will retry)
	I1217 00:05:56.864604   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:05:56.951966   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:05:57.025502   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:05:57.158905   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:05:57.364637   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:05:57.451668   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:05:57.525181   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:05:57.658947   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:05:57.864790   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:05:57.950738   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:05:58.024781   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:05:58.158108   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:05:58.364692   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:05:58.451804   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:05:58.525204   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:05:58.659281   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:05:58.865262   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:05:58.951580   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:05:59.024758   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:05:59.158233   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1217 00:05:59.186839   18113 node_ready.go:57] node "addons-401977" has "Ready":"False" status (will retry)
	I1217 00:05:59.364601   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:05:59.451841   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:05:59.525068   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:05:59.658781   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:05:59.864650   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:05:59.951898   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:06:00.025372   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:00.158798   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:06:00.365084   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:00.451244   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:06:00.525474   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:00.659107   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:06:00.864781   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:00.951238   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:06:01.026809   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:01.158267   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1217 00:06:01.187312   18113 node_ready.go:57] node "addons-401977" has "Ready":"False" status (will retry)
	I1217 00:06:01.364599   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:01.452034   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:06:01.525472   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:01.659337   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:06:01.864191   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:01.951577   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:06:02.024630   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:02.159297   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:06:02.364763   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:02.452028   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:06:02.525535   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:02.659213   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:06:02.865095   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:02.951392   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:06:03.025803   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:03.158055   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1217 00:06:03.187969   18113 node_ready.go:57] node "addons-401977" has "Ready":"False" status (will retry)
	I1217 00:06:03.365032   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:03.451113   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:06:03.525226   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:03.659139   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:06:03.864776   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:03.950975   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:06:04.025228   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:04.158575   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:06:04.364881   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:04.451219   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:06:04.525529   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:04.659143   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:06:04.864911   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:04.951160   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:06:05.025388   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:05.158882   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:06:05.364155   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:05.451385   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:06:05.524889   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:05.658492   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1217 00:06:05.687934   18113 node_ready.go:57] node "addons-401977" has "Ready":"False" status (will retry)
	I1217 00:06:05.864329   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:05.951625   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:06:06.024944   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:06.158487   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:06:06.364761   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:06.450877   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:06:06.525349   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:06.658837   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:06:06.877712   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:06.951596   18113 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1217 00:06:06.951616   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:06:07.024920   18113 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1217 00:06:07.024946   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:07.158597   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:06:07.187898   18113 node_ready.go:49] node "addons-401977" is "Ready"
	I1217 00:06:07.187931   18113 node_ready.go:38] duration metric: took 41.002874319s for node "addons-401977" to be "Ready" ...
	I1217 00:06:07.187948   18113 api_server.go:52] waiting for apiserver process to appear ...
	I1217 00:06:07.188019   18113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:06:07.208592   18113 api_server.go:72] duration metric: took 41.634108933s to wait for apiserver process to appear ...
	I1217 00:06:07.208629   18113 api_server.go:88] waiting for apiserver healthz status ...
	I1217 00:06:07.208654   18113 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1217 00:06:07.213828   18113 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1217 00:06:07.214882   18113 api_server.go:141] control plane version: v1.34.2
	I1217 00:06:07.214914   18113 api_server.go:131] duration metric: took 6.277134ms to wait for apiserver health ...
	I1217 00:06:07.214926   18113 system_pods.go:43] waiting for kube-system pods to appear ...
	I1217 00:06:07.261289   18113 system_pods.go:59] 20 kube-system pods found
	I1217 00:06:07.261332   18113 system_pods.go:61] "amd-gpu-device-plugin-zhxtw" [39b7820e-9767-4f89-a35e-e8e970dc8ced] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1217 00:06:07.261342   18113 system_pods.go:61] "coredns-66bc5c9577-pqbbw" [932eceaf-63fa-4947-b6bd-9022183fe57b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 00:06:07.261353   18113 system_pods.go:61] "csi-hostpath-attacher-0" [cc167fc5-9598-4c16-9567-00a80fc242c5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1217 00:06:07.261361   18113 system_pods.go:61] "csi-hostpath-resizer-0" [f28b0d1f-8e42-4c55-8691-07d3af4af925] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1217 00:06:07.261369   18113 system_pods.go:61] "csi-hostpathplugin-bc4sr" [1f387290-1028-4a87-8a5d-26cb403754c8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1217 00:06:07.261375   18113 system_pods.go:61] "etcd-addons-401977" [f49a181f-e850-428b-8412-a15b29a0fafb] Running
	I1217 00:06:07.261384   18113 system_pods.go:61] "kindnet-h5jgb" [6db99c0c-f95c-4610-abb5-b9dbcc985fd7] Running
	I1217 00:06:07.261390   18113 system_pods.go:61] "kube-apiserver-addons-401977" [2c604fe0-534d-4aee-b254-45f298b455f1] Running
	I1217 00:06:07.261396   18113 system_pods.go:61] "kube-controller-manager-addons-401977" [d5432463-cc2b-4d3a-9268-c0fbfdd5272f] Running
	I1217 00:06:07.261404   18113 system_pods.go:61] "kube-ingress-dns-minikube" [fd9e50d9-c944-4528-9420-199a55f88ca6] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1217 00:06:07.261412   18113 system_pods.go:61] "kube-proxy-rgd8j" [7054d552-b932-49a5-83ba-68fd7943c0c4] Running
	I1217 00:06:07.261419   18113 system_pods.go:61] "kube-scheduler-addons-401977" [91fdd9b7-07ba-4338-a158-f5edfdcac7ac] Running
	I1217 00:06:07.261427   18113 system_pods.go:61] "metrics-server-85b7d694d7-krz87" [e7e57a4b-dfdd-48e7-93e6-72b817b73907] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1217 00:06:07.261436   18113 system_pods.go:61] "nvidia-device-plugin-daemonset-xk8ql" [6f8c2cc8-3d77-495a-902d-fc67c36cde4d] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1217 00:06:07.261446   18113 system_pods.go:61] "registry-6b586f9694-z62qp" [bddc8662-c9eb-4392-837b-010328dd2e70] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1217 00:06:07.261454   18113 system_pods.go:61] "registry-creds-764b6fb674-5ddkb" [99c64f75-ab3b-49e5-b5d9-f425e95c71c1] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1217 00:06:07.261465   18113 system_pods.go:61] "registry-proxy-58fhj" [9b3ecc60-f54f-46fd-8a40-a56e4574bb5b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1217 00:06:07.261476   18113 system_pods.go:61] "snapshot-controller-7d9fbc56b8-bs9sb" [1002c903-bd7c-4827-8b43-4bb428bbab2b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1217 00:06:07.261485   18113 system_pods.go:61] "snapshot-controller-7d9fbc56b8-tqvm8" [35e50a35-6f1d-423a-8a7a-c09636dfbfdb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1217 00:06:07.261492   18113 system_pods.go:61] "storage-provisioner" [a2ddca2b-2eea-4f4c-b89d-c0d6966b5fb1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 00:06:07.261499   18113 system_pods.go:74] duration metric: took 46.565829ms to wait for pod list to return data ...
	I1217 00:06:07.261511   18113 default_sa.go:34] waiting for default service account to be created ...
	I1217 00:06:07.264154   18113 default_sa.go:45] found service account: "default"
	I1217 00:06:07.264181   18113 default_sa.go:55] duration metric: took 2.663055ms for default service account to be created ...
	I1217 00:06:07.264194   18113 system_pods.go:116] waiting for k8s-apps to be running ...
	I1217 00:06:07.361198   18113 system_pods.go:86] 20 kube-system pods found
	I1217 00:06:07.361227   18113 system_pods.go:89] "amd-gpu-device-plugin-zhxtw" [39b7820e-9767-4f89-a35e-e8e970dc8ced] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1217 00:06:07.361234   18113 system_pods.go:89] "coredns-66bc5c9577-pqbbw" [932eceaf-63fa-4947-b6bd-9022183fe57b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 00:06:07.361241   18113 system_pods.go:89] "csi-hostpath-attacher-0" [cc167fc5-9598-4c16-9567-00a80fc242c5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1217 00:06:07.361246   18113 system_pods.go:89] "csi-hostpath-resizer-0" [f28b0d1f-8e42-4c55-8691-07d3af4af925] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1217 00:06:07.361252   18113 system_pods.go:89] "csi-hostpathplugin-bc4sr" [1f387290-1028-4a87-8a5d-26cb403754c8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1217 00:06:07.361255   18113 system_pods.go:89] "etcd-addons-401977" [f49a181f-e850-428b-8412-a15b29a0fafb] Running
	I1217 00:06:07.361260   18113 system_pods.go:89] "kindnet-h5jgb" [6db99c0c-f95c-4610-abb5-b9dbcc985fd7] Running
	I1217 00:06:07.361264   18113 system_pods.go:89] "kube-apiserver-addons-401977" [2c604fe0-534d-4aee-b254-45f298b455f1] Running
	I1217 00:06:07.361267   18113 system_pods.go:89] "kube-controller-manager-addons-401977" [d5432463-cc2b-4d3a-9268-c0fbfdd5272f] Running
	I1217 00:06:07.361273   18113 system_pods.go:89] "kube-ingress-dns-minikube" [fd9e50d9-c944-4528-9420-199a55f88ca6] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1217 00:06:07.361276   18113 system_pods.go:89] "kube-proxy-rgd8j" [7054d552-b932-49a5-83ba-68fd7943c0c4] Running
	I1217 00:06:07.361280   18113 system_pods.go:89] "kube-scheduler-addons-401977" [91fdd9b7-07ba-4338-a158-f5edfdcac7ac] Running
	I1217 00:06:07.361288   18113 system_pods.go:89] "metrics-server-85b7d694d7-krz87" [e7e57a4b-dfdd-48e7-93e6-72b817b73907] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1217 00:06:07.361293   18113 system_pods.go:89] "nvidia-device-plugin-daemonset-xk8ql" [6f8c2cc8-3d77-495a-902d-fc67c36cde4d] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1217 00:06:07.361303   18113 system_pods.go:89] "registry-6b586f9694-z62qp" [bddc8662-c9eb-4392-837b-010328dd2e70] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1217 00:06:07.361308   18113 system_pods.go:89] "registry-creds-764b6fb674-5ddkb" [99c64f75-ab3b-49e5-b5d9-f425e95c71c1] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1217 00:06:07.361313   18113 system_pods.go:89] "registry-proxy-58fhj" [9b3ecc60-f54f-46fd-8a40-a56e4574bb5b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1217 00:06:07.361319   18113 system_pods.go:89] "snapshot-controller-7d9fbc56b8-bs9sb" [1002c903-bd7c-4827-8b43-4bb428bbab2b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1217 00:06:07.361324   18113 system_pods.go:89] "snapshot-controller-7d9fbc56b8-tqvm8" [35e50a35-6f1d-423a-8a7a-c09636dfbfdb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1217 00:06:07.361329   18113 system_pods.go:89] "storage-provisioner" [a2ddca2b-2eea-4f4c-b89d-c0d6966b5fb1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 00:06:07.361342   18113 retry.go:31] will retry after 241.230941ms: missing components: kube-dns
	I1217 00:06:07.364485   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:07.460584   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:06:07.526092   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:07.607930   18113 system_pods.go:86] 20 kube-system pods found
	I1217 00:06:07.607967   18113 system_pods.go:89] "amd-gpu-device-plugin-zhxtw" [39b7820e-9767-4f89-a35e-e8e970dc8ced] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1217 00:06:07.607980   18113 system_pods.go:89] "coredns-66bc5c9577-pqbbw" [932eceaf-63fa-4947-b6bd-9022183fe57b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 00:06:07.608013   18113 system_pods.go:89] "csi-hostpath-attacher-0" [cc167fc5-9598-4c16-9567-00a80fc242c5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1217 00:06:07.608022   18113 system_pods.go:89] "csi-hostpath-resizer-0" [f28b0d1f-8e42-4c55-8691-07d3af4af925] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1217 00:06:07.608032   18113 system_pods.go:89] "csi-hostpathplugin-bc4sr" [1f387290-1028-4a87-8a5d-26cb403754c8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1217 00:06:07.608037   18113 system_pods.go:89] "etcd-addons-401977" [f49a181f-e850-428b-8412-a15b29a0fafb] Running
	I1217 00:06:07.608045   18113 system_pods.go:89] "kindnet-h5jgb" [6db99c0c-f95c-4610-abb5-b9dbcc985fd7] Running
	I1217 00:06:07.608051   18113 system_pods.go:89] "kube-apiserver-addons-401977" [2c604fe0-534d-4aee-b254-45f298b455f1] Running
	I1217 00:06:07.608056   18113 system_pods.go:89] "kube-controller-manager-addons-401977" [d5432463-cc2b-4d3a-9268-c0fbfdd5272f] Running
	I1217 00:06:07.608066   18113 system_pods.go:89] "kube-ingress-dns-minikube" [fd9e50d9-c944-4528-9420-199a55f88ca6] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1217 00:06:07.608071   18113 system_pods.go:89] "kube-proxy-rgd8j" [7054d552-b932-49a5-83ba-68fd7943c0c4] Running
	I1217 00:06:07.608077   18113 system_pods.go:89] "kube-scheduler-addons-401977" [91fdd9b7-07ba-4338-a158-f5edfdcac7ac] Running
	I1217 00:06:07.608085   18113 system_pods.go:89] "metrics-server-85b7d694d7-krz87" [e7e57a4b-dfdd-48e7-93e6-72b817b73907] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1217 00:06:07.608094   18113 system_pods.go:89] "nvidia-device-plugin-daemonset-xk8ql" [6f8c2cc8-3d77-495a-902d-fc67c36cde4d] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1217 00:06:07.608103   18113 system_pods.go:89] "registry-6b586f9694-z62qp" [bddc8662-c9eb-4392-837b-010328dd2e70] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1217 00:06:07.608112   18113 system_pods.go:89] "registry-creds-764b6fb674-5ddkb" [99c64f75-ab3b-49e5-b5d9-f425e95c71c1] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1217 00:06:07.608120   18113 system_pods.go:89] "registry-proxy-58fhj" [9b3ecc60-f54f-46fd-8a40-a56e4574bb5b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1217 00:06:07.608129   18113 system_pods.go:89] "snapshot-controller-7d9fbc56b8-bs9sb" [1002c903-bd7c-4827-8b43-4bb428bbab2b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1217 00:06:07.608147   18113 system_pods.go:89] "snapshot-controller-7d9fbc56b8-tqvm8" [35e50a35-6f1d-423a-8a7a-c09636dfbfdb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1217 00:06:07.608155   18113 system_pods.go:89] "storage-provisioner" [a2ddca2b-2eea-4f4c-b89d-c0d6966b5fb1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 00:06:07.608171   18113 retry.go:31] will retry after 241.43571ms: missing components: kube-dns
	I1217 00:06:07.659453   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:06:07.855137   18113 system_pods.go:86] 20 kube-system pods found
	I1217 00:06:07.855170   18113 system_pods.go:89] "amd-gpu-device-plugin-zhxtw" [39b7820e-9767-4f89-a35e-e8e970dc8ced] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1217 00:06:07.855183   18113 system_pods.go:89] "coredns-66bc5c9577-pqbbw" [932eceaf-63fa-4947-b6bd-9022183fe57b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 00:06:07.855194   18113 system_pods.go:89] "csi-hostpath-attacher-0" [cc167fc5-9598-4c16-9567-00a80fc242c5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1217 00:06:07.855202   18113 system_pods.go:89] "csi-hostpath-resizer-0" [f28b0d1f-8e42-4c55-8691-07d3af4af925] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1217 00:06:07.855213   18113 system_pods.go:89] "csi-hostpathplugin-bc4sr" [1f387290-1028-4a87-8a5d-26cb403754c8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1217 00:06:07.855222   18113 system_pods.go:89] "etcd-addons-401977" [f49a181f-e850-428b-8412-a15b29a0fafb] Running
	I1217 00:06:07.855228   18113 system_pods.go:89] "kindnet-h5jgb" [6db99c0c-f95c-4610-abb5-b9dbcc985fd7] Running
	I1217 00:06:07.855235   18113 system_pods.go:89] "kube-apiserver-addons-401977" [2c604fe0-534d-4aee-b254-45f298b455f1] Running
	I1217 00:06:07.855244   18113 system_pods.go:89] "kube-controller-manager-addons-401977" [d5432463-cc2b-4d3a-9268-c0fbfdd5272f] Running
	I1217 00:06:07.855252   18113 system_pods.go:89] "kube-ingress-dns-minikube" [fd9e50d9-c944-4528-9420-199a55f88ca6] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1217 00:06:07.855257   18113 system_pods.go:89] "kube-proxy-rgd8j" [7054d552-b932-49a5-83ba-68fd7943c0c4] Running
	I1217 00:06:07.855270   18113 system_pods.go:89] "kube-scheduler-addons-401977" [91fdd9b7-07ba-4338-a158-f5edfdcac7ac] Running
	I1217 00:06:07.855278   18113 system_pods.go:89] "metrics-server-85b7d694d7-krz87" [e7e57a4b-dfdd-48e7-93e6-72b817b73907] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1217 00:06:07.855291   18113 system_pods.go:89] "nvidia-device-plugin-daemonset-xk8ql" [6f8c2cc8-3d77-495a-902d-fc67c36cde4d] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1217 00:06:07.855303   18113 system_pods.go:89] "registry-6b586f9694-z62qp" [bddc8662-c9eb-4392-837b-010328dd2e70] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1217 00:06:07.855312   18113 system_pods.go:89] "registry-creds-764b6fb674-5ddkb" [99c64f75-ab3b-49e5-b5d9-f425e95c71c1] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1217 00:06:07.855320   18113 system_pods.go:89] "registry-proxy-58fhj" [9b3ecc60-f54f-46fd-8a40-a56e4574bb5b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1217 00:06:07.855333   18113 system_pods.go:89] "snapshot-controller-7d9fbc56b8-bs9sb" [1002c903-bd7c-4827-8b43-4bb428bbab2b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1217 00:06:07.855344   18113 system_pods.go:89] "snapshot-controller-7d9fbc56b8-tqvm8" [35e50a35-6f1d-423a-8a7a-c09636dfbfdb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1217 00:06:07.855355   18113 system_pods.go:89] "storage-provisioner" [a2ddca2b-2eea-4f4c-b89d-c0d6966b5fb1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 00:06:07.855373   18113 retry.go:31] will retry after 466.346009ms: missing components: kube-dns
	I1217 00:06:07.865391   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:07.952138   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:06:08.025706   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:08.160181   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:06:08.326205   18113 system_pods.go:86] 20 kube-system pods found
	I1217 00:06:08.326242   18113 system_pods.go:89] "amd-gpu-device-plugin-zhxtw" [39b7820e-9767-4f89-a35e-e8e970dc8ced] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1217 00:06:08.326250   18113 system_pods.go:89] "coredns-66bc5c9577-pqbbw" [932eceaf-63fa-4947-b6bd-9022183fe57b] Running
	I1217 00:06:08.326261   18113 system_pods.go:89] "csi-hostpath-attacher-0" [cc167fc5-9598-4c16-9567-00a80fc242c5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1217 00:06:08.326269   18113 system_pods.go:89] "csi-hostpath-resizer-0" [f28b0d1f-8e42-4c55-8691-07d3af4af925] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1217 00:06:08.326279   18113 system_pods.go:89] "csi-hostpathplugin-bc4sr" [1f387290-1028-4a87-8a5d-26cb403754c8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1217 00:06:08.326288   18113 system_pods.go:89] "etcd-addons-401977" [f49a181f-e850-428b-8412-a15b29a0fafb] Running
	I1217 00:06:08.326294   18113 system_pods.go:89] "kindnet-h5jgb" [6db99c0c-f95c-4610-abb5-b9dbcc985fd7] Running
	I1217 00:06:08.326302   18113 system_pods.go:89] "kube-apiserver-addons-401977" [2c604fe0-534d-4aee-b254-45f298b455f1] Running
	I1217 00:06:08.326308   18113 system_pods.go:89] "kube-controller-manager-addons-401977" [d5432463-cc2b-4d3a-9268-c0fbfdd5272f] Running
	I1217 00:06:08.326316   18113 system_pods.go:89] "kube-ingress-dns-minikube" [fd9e50d9-c944-4528-9420-199a55f88ca6] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1217 00:06:08.326323   18113 system_pods.go:89] "kube-proxy-rgd8j" [7054d552-b932-49a5-83ba-68fd7943c0c4] Running
	I1217 00:06:08.326328   18113 system_pods.go:89] "kube-scheduler-addons-401977" [91fdd9b7-07ba-4338-a158-f5edfdcac7ac] Running
	I1217 00:06:08.326348   18113 system_pods.go:89] "metrics-server-85b7d694d7-krz87" [e7e57a4b-dfdd-48e7-93e6-72b817b73907] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1217 00:06:08.326360   18113 system_pods.go:89] "nvidia-device-plugin-daemonset-xk8ql" [6f8c2cc8-3d77-495a-902d-fc67c36cde4d] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1217 00:06:08.326368   18113 system_pods.go:89] "registry-6b586f9694-z62qp" [bddc8662-c9eb-4392-837b-010328dd2e70] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1217 00:06:08.326377   18113 system_pods.go:89] "registry-creds-764b6fb674-5ddkb" [99c64f75-ab3b-49e5-b5d9-f425e95c71c1] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1217 00:06:08.326384   18113 system_pods.go:89] "registry-proxy-58fhj" [9b3ecc60-f54f-46fd-8a40-a56e4574bb5b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1217 00:06:08.326395   18113 system_pods.go:89] "snapshot-controller-7d9fbc56b8-bs9sb" [1002c903-bd7c-4827-8b43-4bb428bbab2b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1217 00:06:08.326404   18113 system_pods.go:89] "snapshot-controller-7d9fbc56b8-tqvm8" [35e50a35-6f1d-423a-8a7a-c09636dfbfdb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1217 00:06:08.326410   18113 system_pods.go:89] "storage-provisioner" [a2ddca2b-2eea-4f4c-b89d-c0d6966b5fb1] Running
	I1217 00:06:08.326419   18113 system_pods.go:126] duration metric: took 1.062196423s to wait for k8s-apps to be running ...
	I1217 00:06:08.326431   18113 system_svc.go:44] waiting for kubelet service to be running ....
	I1217 00:06:08.326481   18113 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 00:06:08.341868   18113 system_svc.go:56] duration metric: took 15.425848ms WaitForService to wait for kubelet
	I1217 00:06:08.341899   18113 kubeadm.go:587] duration metric: took 42.76742531s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 00:06:08.341920   18113 node_conditions.go:102] verifying NodePressure condition ...
	I1217 00:06:08.344868   18113 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1217 00:06:08.344895   18113 node_conditions.go:123] node cpu capacity is 8
	I1217 00:06:08.344919   18113 node_conditions.go:105] duration metric: took 2.992433ms to run NodePressure ...
	I1217 00:06:08.344934   18113 start.go:242] waiting for startup goroutines ...
	I1217 00:06:08.365543   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:08.452517   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:06:08.553338   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:08.659132   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:06:08.864864   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:08.951639   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:06:09.025390   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:09.159255   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:06:09.367599   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:09.453605   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:06:09.526283   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:09.660249   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:06:09.865811   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:09.951716   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:06:10.026085   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:10.159176   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:06:10.364908   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:10.451853   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:06:10.525883   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:10.660106   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:06:10.865117   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:10.951719   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:06:11.025835   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:11.160090   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:06:11.366490   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:11.452340   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:06:11.526486   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:11.659510   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:06:11.865010   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:11.951811   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:06:12.025546   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:12.159663   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:06:12.365548   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:12.452235   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:06:12.526321   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:12.659183   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:06:12.864875   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:12.951822   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:06:13.025764   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:13.159370   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:06:13.365291   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:13.451893   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:06:13.525651   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:13.659615   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:06:13.865054   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:13.951837   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:06:14.025674   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:14.159543   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:06:14.365568   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:14.452285   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:06:14.526414   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:14.659354   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:06:14.865402   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:14.951846   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:06:15.025276   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:15.158985   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:06:15.366974   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:15.451830   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:06:15.525717   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:15.659046   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:06:15.865205   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:15.952632   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:06:16.025760   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:16.159905   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:06:16.365525   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:16.452565   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:06:16.525715   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:16.659649   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:06:16.865602   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:16.952478   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:06:17.026409   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:17.159022   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:06:17.364648   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:17.452299   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:06:17.525979   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:17.659227   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:06:17.864189   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:17.951654   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:06:18.025605   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:18.159380   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:06:18.365082   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:18.465261   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:06:18.525671   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:18.659283   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:06:18.865101   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:18.952891   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:06:19.026489   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:19.159627   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:06:19.365444   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:19.452109   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:06:19.525913   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:19.658738   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:06:19.864495   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:19.952470   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:06:20.026510   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:20.159511   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:06:20.364981   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:20.451084   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:06:20.525874   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:20.659418   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:06:20.865489   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:20.951881   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:06:21.025762   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:21.159696   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:06:21.365527   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:21.466357   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:06:21.526614   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:21.659480   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:06:21.865965   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:21.951788   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:06:22.025351   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:22.158967   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:06:22.364114   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:22.451565   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:06:22.525191   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:22.658441   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:06:22.865039   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:22.951125   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:06:23.026094   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:23.161422   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:06:23.365503   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:23.452160   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:06:23.526183   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:23.659113   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:06:23.864567   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:23.952267   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:06:24.026352   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:24.158923   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:06:24.366209   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:24.451695   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:06:24.525236   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:24.658792   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:06:24.864020   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:24.951589   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:06:25.025121   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:25.158466   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:06:25.365148   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:25.451346   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:06:25.525890   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:25.658495   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:06:25.864974   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:25.951645   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:06:26.025982   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:26.158353   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:06:26.365190   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:26.465250   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:06:26.566661   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:26.659433   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:06:26.864928   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:26.951395   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:06:27.026640   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:27.158367   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:06:27.364475   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:27.452325   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:06:27.526389   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:27.659142   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:06:27.865682   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:27.952721   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:06:28.029312   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:28.158685   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:06:28.365839   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:28.451254   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:06:28.526329   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:28.659278   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:06:28.864795   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:28.951400   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:06:29.026232   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:29.158859   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:06:29.365691   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:29.452526   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:06:29.525354   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:29.659249   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:06:29.865088   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:29.965453   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:06:30.025863   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:30.159428   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:06:30.364727   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:30.452575   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:06:30.525927   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:30.658810   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:06:30.865140   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:30.951911   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:06:31.026015   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:31.158713   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:06:31.364916   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:31.451116   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:06:31.526361   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:31.660077   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:06:31.865852   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:31.951774   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:06:32.026040   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:32.160204   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:06:32.365428   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:32.470620   18113 kapi.go:107] duration metric: took 1m5.52190366s to wait for kubernetes.io/minikube-addons=registry ...
	I1217 00:06:32.525774   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:32.659058   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:06:32.864916   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:33.025746   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:33.159612   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:06:33.365335   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:33.526754   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:33.659849   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:06:33.865120   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:34.026280   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:34.159264   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:06:34.364974   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:34.525983   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:34.658606   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:06:34.864972   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:35.025836   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:35.158495   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:06:35.365713   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:35.526105   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:35.659219   18113 kapi.go:107] duration metric: took 1m8.503534052s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1217 00:06:35.864813   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:36.025513   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:36.366379   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:36.527468   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:36.865738   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:37.025970   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:37.364918   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:37.525808   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:37.864778   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:38.026063   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:38.364576   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:38.525750   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:38.864410   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:39.026484   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:39.365919   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:39.525785   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:39.865462   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:40.025699   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:40.365554   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:40.525707   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:40.864487   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:41.025586   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:41.365046   18113 kapi.go:107] duration metric: took 1m7.503198603s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1217 00:06:41.366449   18113 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-401977 cluster.
	I1217 00:06:41.367689   18113 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1217 00:06:41.369013   18113 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1217 00:06:41.526531   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:42.026361   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:42.571319   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:43.025480   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:43.526126   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:44.025659   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:44.526445   18113 kapi.go:107] duration metric: took 1m17.004079884s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1217 00:06:44.528247   18113 out.go:179] * Enabled addons: amd-gpu-device-plugin, registry-creds, inspektor-gadget, storage-provisioner, nvidia-device-plugin, cloud-spanner, ingress-dns, metrics-server, storage-provisioner-rancher, yakd, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1217 00:06:44.529442   18113 addons.go:530] duration metric: took 1m18.954939299s for enable addons: enabled=[amd-gpu-device-plugin registry-creds inspektor-gadget storage-provisioner nvidia-device-plugin cloud-spanner ingress-dns metrics-server storage-provisioner-rancher yakd volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1217 00:06:44.529485   18113 start.go:247] waiting for cluster config update ...
	I1217 00:06:44.529502   18113 start.go:256] writing updated cluster config ...
	I1217 00:06:44.529747   18113 ssh_runner.go:195] Run: rm -f paused
	I1217 00:06:44.533603   18113 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 00:06:44.536253   18113 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-pqbbw" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:06:44.539679   18113 pod_ready.go:94] pod "coredns-66bc5c9577-pqbbw" is "Ready"
	I1217 00:06:44.539697   18113 pod_ready.go:86] duration metric: took 3.423059ms for pod "coredns-66bc5c9577-pqbbw" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:06:44.541350   18113 pod_ready.go:83] waiting for pod "etcd-addons-401977" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:06:44.544185   18113 pod_ready.go:94] pod "etcd-addons-401977" is "Ready"
	I1217 00:06:44.544199   18113 pod_ready.go:86] duration metric: took 2.833386ms for pod "etcd-addons-401977" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:06:44.545660   18113 pod_ready.go:83] waiting for pod "kube-apiserver-addons-401977" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:06:44.548469   18113 pod_ready.go:94] pod "kube-apiserver-addons-401977" is "Ready"
	I1217 00:06:44.548488   18113 pod_ready.go:86] duration metric: took 2.813397ms for pod "kube-apiserver-addons-401977" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:06:44.550147   18113 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-401977" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:06:44.937691   18113 pod_ready.go:94] pod "kube-controller-manager-addons-401977" is "Ready"
	I1217 00:06:44.937715   18113 pod_ready.go:86] duration metric: took 387.551285ms for pod "kube-controller-manager-addons-401977" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:06:45.137259   18113 pod_ready.go:83] waiting for pod "kube-proxy-rgd8j" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:06:45.536602   18113 pod_ready.go:94] pod "kube-proxy-rgd8j" is "Ready"
	I1217 00:06:45.536634   18113 pod_ready.go:86] duration metric: took 399.354868ms for pod "kube-proxy-rgd8j" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:06:45.738018   18113 pod_ready.go:83] waiting for pod "kube-scheduler-addons-401977" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:06:46.137572   18113 pod_ready.go:94] pod "kube-scheduler-addons-401977" is "Ready"
	I1217 00:06:46.137599   18113 pod_ready.go:86] duration metric: took 399.553329ms for pod "kube-scheduler-addons-401977" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:06:46.137611   18113 pod_ready.go:40] duration metric: took 1.603984915s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 00:06:46.179884   18113 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1217 00:06:46.181726   18113 out.go:179] * Done! kubectl is now configured to use "addons-401977" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 17 00:09:21 addons-401977 crio[774]: time="2025-12-17T00:09:21.130716329Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-r867g/POD" id=2482127e-7b53-400a-8e0a-cdf2f87c945a name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 17 00:09:21 addons-401977 crio[774]: time="2025-12-17T00:09:21.130800978Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 00:09:21 addons-401977 crio[774]: time="2025-12-17T00:09:21.138706612Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-r867g Namespace:default ID:dfa117d24cbbb51c5486eab5b8f548a04e16491043732c308d0e72d618c8b855 UID:7518e48a-3661-42ad-89a3-b2ab2c8fc34a NetNS:/var/run/netns/217e6b4a-b08a-4833-bcd2-3501962e46ee Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000ad02d0}] Aliases:map[]}"
	Dec 17 00:09:21 addons-401977 crio[774]: time="2025-12-17T00:09:21.138733616Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-r867g to CNI network \"kindnet\" (type=ptp)"
	Dec 17 00:09:21 addons-401977 crio[774]: time="2025-12-17T00:09:21.148703615Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-r867g Namespace:default ID:dfa117d24cbbb51c5486eab5b8f548a04e16491043732c308d0e72d618c8b855 UID:7518e48a-3661-42ad-89a3-b2ab2c8fc34a NetNS:/var/run/netns/217e6b4a-b08a-4833-bcd2-3501962e46ee Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000ad02d0}] Aliases:map[]}"
	Dec 17 00:09:21 addons-401977 crio[774]: time="2025-12-17T00:09:21.148826633Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-r867g for CNI network kindnet (type=ptp)"
	Dec 17 00:09:21 addons-401977 crio[774]: time="2025-12-17T00:09:21.149594329Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 17 00:09:21 addons-401977 crio[774]: time="2025-12-17T00:09:21.150432851Z" level=info msg="Ran pod sandbox dfa117d24cbbb51c5486eab5b8f548a04e16491043732c308d0e72d618c8b855 with infra container: default/hello-world-app-5d498dc89-r867g/POD" id=2482127e-7b53-400a-8e0a-cdf2f87c945a name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 17 00:09:21 addons-401977 crio[774]: time="2025-12-17T00:09:21.151593152Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=0220918c-f785-45db-b4a2-5195f171192e name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:09:21 addons-401977 crio[774]: time="2025-12-17T00:09:21.151721924Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=0220918c-f785-45db-b4a2-5195f171192e name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:09:21 addons-401977 crio[774]: time="2025-12-17T00:09:21.151782714Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:1.0 found" id=0220918c-f785-45db-b4a2-5195f171192e name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:09:21 addons-401977 crio[774]: time="2025-12-17T00:09:21.152497256Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=d701a718-7748-496e-be29-ac4d9395d3f8 name=/runtime.v1.ImageService/PullImage
	Dec 17 00:09:21 addons-401977 crio[774]: time="2025-12-17T00:09:21.157298618Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Dec 17 00:09:22 addons-401977 crio[774]: time="2025-12-17T00:09:22.096257873Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86" id=d701a718-7748-496e-be29-ac4d9395d3f8 name=/runtime.v1.ImageService/PullImage
	Dec 17 00:09:22 addons-401977 crio[774]: time="2025-12-17T00:09:22.096764898Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=f97bc5d5-ba1f-416f-8f3a-0c4f2152f9b8 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:09:22 addons-401977 crio[774]: time="2025-12-17T00:09:22.098057656Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=289bee76-5183-4709-aa2b-c0cf8eac8ca7 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:09:22 addons-401977 crio[774]: time="2025-12-17T00:09:22.101634843Z" level=info msg="Creating container: default/hello-world-app-5d498dc89-r867g/hello-world-app" id=35aaac66-1947-49a1-9827-28c780dd2392 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 00:09:22 addons-401977 crio[774]: time="2025-12-17T00:09:22.101760677Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 00:09:22 addons-401977 crio[774]: time="2025-12-17T00:09:22.109288742Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 00:09:22 addons-401977 crio[774]: time="2025-12-17T00:09:22.109432793Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/4fa97fc94b14fcc99c30a8f3bd8910b18b9b7070ff8b6e217405718d2e789696/merged/etc/passwd: no such file or directory"
	Dec 17 00:09:22 addons-401977 crio[774]: time="2025-12-17T00:09:22.109456484Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/4fa97fc94b14fcc99c30a8f3bd8910b18b9b7070ff8b6e217405718d2e789696/merged/etc/group: no such file or directory"
	Dec 17 00:09:22 addons-401977 crio[774]: time="2025-12-17T00:09:22.109650332Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 00:09:22 addons-401977 crio[774]: time="2025-12-17T00:09:22.151286613Z" level=info msg="Created container d82bada9cc2b77025f32aff122c607e4a3cc53d019cfa49311cfd2ffb6918023: default/hello-world-app-5d498dc89-r867g/hello-world-app" id=35aaac66-1947-49a1-9827-28c780dd2392 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 00:09:22 addons-401977 crio[774]: time="2025-12-17T00:09:22.151827631Z" level=info msg="Starting container: d82bada9cc2b77025f32aff122c607e4a3cc53d019cfa49311cfd2ffb6918023" id=36490005-864b-416c-9f3e-5d487cb2b89b name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 00:09:22 addons-401977 crio[774]: time="2025-12-17T00:09:22.153735369Z" level=info msg="Started container" PID=9561 containerID=d82bada9cc2b77025f32aff122c607e4a3cc53d019cfa49311cfd2ffb6918023 description=default/hello-world-app-5d498dc89-r867g/hello-world-app id=36490005-864b-416c-9f3e-5d487cb2b89b name=/runtime.v1.RuntimeService/StartContainer sandboxID=dfa117d24cbbb51c5486eab5b8f548a04e16491043732c308d0e72d618c8b855
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED                  STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	d82bada9cc2b7       docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86                                        Less than a second ago   Running             hello-world-app                          0                   dfa117d24cbbb       hello-world-app-5d498dc89-r867g             default
	7d625ea082756       docker.io/upmcenterprises/registry-creds@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605                             2 minutes ago            Running             registry-creds                           0                   4cfb7be939f7b       registry-creds-764b6fb674-5ddkb             kube-system
	cc6ac6503dcaa       public.ecr.aws/nginx/nginx@sha256:ec57271c43784c07301ebcc4bf37d6011b9b9d661d0cf229f2aa199e78a7312c                                           2 minutes ago            Running             nginx                                    0                   305ce77c3e359       nginx                                       default
	103255e2d8ad9       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          2 minutes ago            Running             busybox                                  0                   7223162eccc2c       busybox                                     default
	ad54e02660b07       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          2 minutes ago            Running             csi-snapshotter                          0                   4bc127d16a688       csi-hostpathplugin-bc4sr                    kube-system
	45b944b097394       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          2 minutes ago            Running             csi-provisioner                          0                   4bc127d16a688       csi-hostpathplugin-bc4sr                    kube-system
	208d6abcaa9c4       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 2 minutes ago            Running             gcp-auth                                 0                   6f4e36339936c       gcp-auth-78565c9fb4-gbjdx                   gcp-auth
	96a32ce198ee5       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            2 minutes ago            Running             liveness-probe                           0                   4bc127d16a688       csi-hostpathplugin-bc4sr                    kube-system
	6060f042efdda       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           2 minutes ago            Running             hostpath                                 0                   4bc127d16a688       csi-hostpathplugin-bc4sr                    kube-system
	eb167c0e68a1e       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:ea428be7b01d41418fca4d91ae3dff6b037bdc0d42757e7ad392a38536488a1a                            2 minutes ago            Running             gadget                                   0                   1b43f20b34f0e       gadget-8kklk                                gadget
	cfb270351b771       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                2 minutes ago            Running             node-driver-registrar                    0                   4bc127d16a688       csi-hostpathplugin-bc4sr                    kube-system
	96fc010332893       registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad                             2 minutes ago            Running             controller                               0                   29ddfda06c85b       ingress-nginx-controller-85d4c799dd-2ntv7   ingress-nginx
	60a76e334b179       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              2 minutes ago            Running             registry-proxy                           0                   0f9e72c9e7c88       registry-proxy-58fhj                        kube-system
	404a83db71038       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   2 minutes ago            Running             csi-external-health-monitor-controller   0                   4bc127d16a688       csi-hostpathplugin-bc4sr                    kube-system
	7477be1e8e83d       nvcr.io/nvidia/k8s-device-plugin@sha256:20db699f1480b6f37423cab909e9c6df5a4fdbd981b405e0d72f00a86fee5100                                     2 minutes ago            Running             nvidia-device-plugin-ctr                 0                   85327ca941bc7       nvidia-device-plugin-daemonset-xk8ql        kube-system
	88b4569360ba9       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      2 minutes ago            Running             volume-snapshot-controller               0                   9cc1ce53b231c       snapshot-controller-7d9fbc56b8-bs9sb        kube-system
	7ad73ae76171d       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     2 minutes ago            Running             amd-gpu-device-plugin                    0                   4d11e99e93013       amd-gpu-device-plugin-zhxtw                 kube-system
	9032486c9c486       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e2d8d9e1553c1ac5f9f41bc34d38d1eda519ed77a3106b036c43b6667dad19a9                   2 minutes ago            Exited              patch                                    0                   313ae24dd8dd0       ingress-nginx-admission-patch-md92j         ingress-nginx
	e77e53ca2e567       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              2 minutes ago            Running             csi-resizer                              0                   84c2ab48daa1f       csi-hostpath-resizer-0                      kube-system
	d5d932c1082d3       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             2 minutes ago            Running             csi-attacher                             0                   cad03d8857931       csi-hostpath-attacher-0                     kube-system
	5d7bc94a6e762       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      2 minutes ago            Running             volume-snapshot-controller               0                   4f1c88f6d0153       snapshot-controller-7d9fbc56b8-tqvm8        kube-system
	58060993a0fdf       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e2d8d9e1553c1ac5f9f41bc34d38d1eda519ed77a3106b036c43b6667dad19a9                   3 minutes ago            Exited              create                                   0                   c5cd8d8303910       ingress-nginx-admission-create-9xxch        ingress-nginx
	1875adcd7ee31       gcr.io/cloud-spanner-emulator/emulator@sha256:22a4d5b0f97bd0c2ee20da342493c5a60e40b4d62ec20c174cb32ff4ee1f65bf                               3 minutes ago            Running             cloud-spanner-emulator                   0                   147648c7aa617       cloud-spanner-emulator-5bdddb765-z68hl      default
	e46ae5d0bae44       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             3 minutes ago            Running             local-path-provisioner                   0                   56380a7dc8c65       local-path-provisioner-648f6765c9-pxnl9     local-path-storage
	0abee2adb5882       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                                              3 minutes ago            Running             yakd                                     0                   579a4dfb1f9b3       yakd-dashboard-5ff678cb9-m4h6g              yakd-dashboard
	b3c5366ec83c7       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           3 minutes ago            Running             registry                                 0                   61d76bc91493a       registry-6b586f9694-z62qp                   kube-system
	dfd9e15edab91       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               3 minutes ago            Running             minikube-ingress-dns                     0                   6e8291a8acd06       kube-ingress-dns-minikube                   kube-system
	a380c22257b5c       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        3 minutes ago            Running             metrics-server                           0                   a0a96b039d496       metrics-server-85b7d694d7-krz87             kube-system
	f6e58bb2900bb       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             3 minutes ago            Running             coredns                                  0                   83d2029d2973f       coredns-66bc5c9577-pqbbw                    kube-system
	383049ced70e6       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             3 minutes ago            Running             storage-provisioner                      0                   643729572f86b       storage-provisioner                         kube-system
	840301d1bb594       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                                             3 minutes ago            Running             kindnet-cni                              0                   2361da5dc56fe       kindnet-h5jgb                               kube-system
	950dc7c477829       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                                                             3 minutes ago            Running             kube-proxy                               0                   fb57c0e4200cc       kube-proxy-rgd8j                            kube-system
	c85efeb6af746       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                                                             4 minutes ago            Running             kube-scheduler                           0                   d9c7b0fa863c0       kube-scheduler-addons-401977                kube-system
	f55d4645a3da6       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                                                             4 minutes ago            Running             kube-controller-manager                  0                   bafe11db410e1       kube-controller-manager-addons-401977       kube-system
	a9fb6926bb935       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                                                             4 minutes ago            Running             etcd                                     0                   67241c7cf5b2d       etcd-addons-401977                          kube-system
	a5dab92a052f8       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                                                             4 minutes ago            Running             kube-apiserver                           0                   d0fb3406ff2c3       kube-apiserver-addons-401977                kube-system
	
	
	==> coredns [f6e58bb2900bb7013f6f81ccca2250cb1b6547be3edcc33d4bc867ae9d0b4072] <==
	[INFO] 10.244.0.22:60594 - 57330 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000212699s
	[INFO] 10.244.0.22:46058 - 51217 "AAAA IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 188 0.004393543s
	[INFO] 10.244.0.22:38674 - 18855 "A IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 188 0.00542322s
	[INFO] 10.244.0.22:57292 - 51412 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.005239056s
	[INFO] 10.244.0.22:44793 - 2544 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.006119828s
	[INFO] 10.244.0.22:50014 - 52201 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.003939884s
	[INFO] 10.244.0.22:34804 - 7486 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.005158845s
	[INFO] 10.244.0.22:54900 - 64521 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.00199762s
	[INFO] 10.244.0.22:43937 - 25315 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.002283867s
	[INFO] 10.244.0.25:58581 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000174326s
	[INFO] 10.244.0.25:48126 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000133432s
	[INFO] 10.244.0.26:52248 - 25404 "A IN accounts.google.com.kube-system.svc.cluster.local. udp 67 false 512" NXDOMAIN qr,aa,rd 160 0.000237148s
	[INFO] 10.244.0.26:49851 - 40734 "AAAA IN accounts.google.com.kube-system.svc.cluster.local. udp 67 false 512" NXDOMAIN qr,aa,rd 160 0.000298041s
	[INFO] 10.244.0.26:39961 - 36166 "AAAA IN accounts.google.com.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.000100385s
	[INFO] 10.244.0.26:33756 - 6133 "A IN accounts.google.com.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.000123786s
	[INFO] 10.244.0.26:45244 - 2136 "A IN accounts.google.com.cluster.local. udp 51 false 512" NXDOMAIN qr,aa,rd 144 0.000090679s
	[INFO] 10.244.0.26:43500 - 54752 "AAAA IN accounts.google.com.cluster.local. udp 51 false 512" NXDOMAIN qr,aa,rd 144 0.000108303s
	[INFO] 10.244.0.26:59766 - 16260 "AAAA IN accounts.google.com.us-central1-a.c.k8s-minikube.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 185 0.004884785s
	[INFO] 10.244.0.26:38430 - 12825 "A IN accounts.google.com.us-central1-a.c.k8s-minikube.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 185 0.008323795s
	[INFO] 10.244.0.26:60589 - 18737 "A IN accounts.google.com.c.k8s-minikube.internal. udp 61 false 512" NXDOMAIN qr,rd,ra 166 0.004926328s
	[INFO] 10.244.0.26:60361 - 7916 "AAAA IN accounts.google.com.c.k8s-minikube.internal. udp 61 false 512" NXDOMAIN qr,rd,ra 166 0.005252734s
	[INFO] 10.244.0.26:47728 - 34006 "AAAA IN accounts.google.com.google.internal. udp 53 false 512" NXDOMAIN qr,rd,ra 158 0.004105004s
	[INFO] 10.244.0.26:58085 - 39006 "A IN accounts.google.com.google.internal. udp 53 false 512" NXDOMAIN qr,rd,ra 158 0.017639894s
	[INFO] 10.244.0.26:57421 - 20765 "A IN accounts.google.com. udp 37 false 512" NOERROR qr,rd,ra 72 0.001680894s
	[INFO] 10.244.0.26:38894 - 48069 "AAAA IN accounts.google.com. udp 37 false 512" NOERROR qr,rd,ra 84 0.002033146s
	
	
	==> describe nodes <==
	Name:               addons-401977
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-401977
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c7bb9b74fe8fa422b352c813eb039f077f405cb1
	                    minikube.k8s.io/name=addons-401977
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_17T00_05_20_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-401977
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-401977"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Dec 2025 00:05:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-401977
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Dec 2025 00:09:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Dec 2025 00:08:43 +0000   Wed, 17 Dec 2025 00:05:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Dec 2025 00:08:43 +0000   Wed, 17 Dec 2025 00:05:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Dec 2025 00:08:43 +0000   Wed, 17 Dec 2025 00:05:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Dec 2025 00:08:43 +0000   Wed, 17 Dec 2025 00:06:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-401977
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 f435001bb2f82675fe905cfc693dda54
	  System UUID:                02cdd826-ad7f-4fd9-ac65-a0cc01c6f3f3
	  Boot ID:                    0e9cedc6-c46e-4354-b3d2-9272a8b33ae5
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (29 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m36s
	  default                     cloud-spanner-emulator-5bdddb765-z68hl       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m56s
	  default                     hello-world-app-5d498dc89-r867g              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m25s
	  gadget                      gadget-8kklk                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m56s
	  gcp-auth                    gcp-auth-78565c9fb4-gbjdx                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m49s
	  ingress-nginx               ingress-nginx-controller-85d4c799dd-2ntv7    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         3m55s
	  kube-system                 amd-gpu-device-plugin-zhxtw                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m16s
	  kube-system                 coredns-66bc5c9577-pqbbw                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     3m57s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m55s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m55s
	  kube-system                 csi-hostpathplugin-bc4sr                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m16s
	  kube-system                 etcd-addons-401977                           100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m4s
	  kube-system                 kindnet-h5jgb                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      3m57s
	  kube-system                 kube-apiserver-addons-401977                 250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m2s
	  kube-system                 kube-controller-manager-addons-401977        200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m2s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m56s
	  kube-system                 kube-proxy-rgd8j                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m57s
	  kube-system                 kube-scheduler-addons-401977                 100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m2s
	  kube-system                 metrics-server-85b7d694d7-krz87              100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         3m56s
	  kube-system                 nvidia-device-plugin-daemonset-xk8ql         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m16s
	  kube-system                 registry-6b586f9694-z62qp                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m56s
	  kube-system                 registry-creds-764b6fb674-5ddkb              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m56s
	  kube-system                 registry-proxy-58fhj                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m16s
	  kube-system                 snapshot-controller-7d9fbc56b8-bs9sb         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m55s
	  kube-system                 snapshot-controller-7d9fbc56b8-tqvm8         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m55s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m56s
	  local-path-storage          local-path-provisioner-648f6765c9-pxnl9      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m56s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-m4h6g               0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     3m56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 3m55s  kube-proxy       
	  Normal  Starting                 4m3s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m2s   kubelet          Node addons-401977 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m2s   kubelet          Node addons-401977 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m2s   kubelet          Node addons-401977 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3m58s  node-controller  Node addons-401977 event: Registered Node addons-401977 in Controller
	  Normal  NodeReady                3m16s  kubelet          Node addons-401977 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.089382] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024236] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.864694] kauditd_printk_skb: 47 callbacks suppressed
	[Dec17 00:07] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +1.006904] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +1.023891] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +1.023891] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +1.023853] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +1.023879] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +2.048755] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +4.030595] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +8.447143] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[ +16.382404] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000015] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[Dec17 00:08] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	
	
	==> etcd [a9fb6926bb935bb35c23586c7a59d3ecc32fdac56e6508767e75f4b3b5db4340] <==
	{"level":"warn","ts":"2025-12-17T00:05:17.117815Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41474","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:05:17.125867Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41504","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:05:17.134302Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41516","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:05:17.142107Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41534","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:05:17.156065Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41558","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:05:17.160769Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:05:17.167426Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41598","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:05:17.175029Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41616","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:05:17.182425Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41638","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:05:17.188716Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41644","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:05:17.195786Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41648","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:05:17.203872Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41664","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:05:17.210939Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41676","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:05:17.218227Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41688","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:05:17.225251Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41704","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:05:17.241134Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41724","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:05:17.247717Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41742","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:05:17.254720Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41764","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:05:17.306961Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41786","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:05:27.845975Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49906","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:05:27.852167Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49908","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:05:54.708690Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54380","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:05:54.715226Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:05:54.730530Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54410","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:05:54.736778Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54434","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [208d6abcaa9c4d4908630c30283b4b76225353c0a6f9a3858458a258bc072371] <==
	2025/12/17 00:06:40 GCP Auth Webhook started!
	2025/12/17 00:06:46 Ready to marshal response ...
	2025/12/17 00:06:46 Ready to write response ...
	2025/12/17 00:06:46 Ready to marshal response ...
	2025/12/17 00:06:46 Ready to write response ...
	2025/12/17 00:06:46 Ready to marshal response ...
	2025/12/17 00:06:46 Ready to write response ...
	2025/12/17 00:06:57 Ready to marshal response ...
	2025/12/17 00:06:57 Ready to write response ...
	2025/12/17 00:07:04 Ready to marshal response ...
	2025/12/17 00:07:04 Ready to write response ...
	2025/12/17 00:07:13 Ready to marshal response ...
	2025/12/17 00:07:13 Ready to write response ...
	2025/12/17 00:07:13 Ready to marshal response ...
	2025/12/17 00:07:13 Ready to write response ...
	2025/12/17 00:07:19 Ready to marshal response ...
	2025/12/17 00:07:19 Ready to write response ...
	2025/12/17 00:07:22 Ready to marshal response ...
	2025/12/17 00:07:22 Ready to write response ...
	2025/12/17 00:07:28 Ready to marshal response ...
	2025/12/17 00:07:28 Ready to write response ...
	2025/12/17 00:09:20 Ready to marshal response ...
	2025/12/17 00:09:20 Ready to write response ...
	
	
	==> kernel <==
	 00:09:22 up 51 min,  0 user,  load average: 0.42, 0.77, 0.40
	Linux addons-401977 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [840301d1bb594051e430e85719c2707ed97013c7e3269f84012213ab768d9935] <==
	I1217 00:07:16.400312       1 main.go:301] handling current node
	I1217 00:07:26.399383       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 00:07:26.399414       1 main.go:301] handling current node
	I1217 00:07:36.399492       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 00:07:36.399548       1 main.go:301] handling current node
	I1217 00:07:46.400273       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 00:07:46.400301       1 main.go:301] handling current node
	I1217 00:07:56.404085       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 00:07:56.404124       1 main.go:301] handling current node
	I1217 00:08:06.404226       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 00:08:06.404258       1 main.go:301] handling current node
	I1217 00:08:16.401188       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 00:08:16.401217       1 main.go:301] handling current node
	I1217 00:08:26.399390       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 00:08:26.399425       1 main.go:301] handling current node
	I1217 00:08:36.399826       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 00:08:36.399862       1 main.go:301] handling current node
	I1217 00:08:46.399573       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 00:08:46.399616       1 main.go:301] handling current node
	I1217 00:08:56.399385       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 00:08:56.399413       1 main.go:301] handling current node
	I1217 00:09:06.399684       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 00:09:06.399709       1 main.go:301] handling current node
	I1217 00:09:16.399751       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 00:09:16.399796       1 main.go:301] handling current node
	
	
	==> kube-apiserver [a5dab92a052f84df44c207e5dd5c238be41faadb22b92368fa8135c2af2fd265] <==
	 > logger="UnhandledError"
	E1217 00:06:11.089698       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.106.69.233:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.106.69.233:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.106.69.233:443: connect: connection refused" logger="UnhandledError"
	E1217 00:06:11.091561       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.106.69.233:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.106.69.233:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.106.69.233:443: connect: connection refused" logger="UnhandledError"
	W1217 00:06:12.090510       1 handler_proxy.go:99] no RequestInfo found in the context
	E1217 00:06:12.090545       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1217 00:06:12.090558       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1217 00:06:12.090593       1 handler_proxy.go:99] no RequestInfo found in the context
	E1217 00:06:12.090657       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1217 00:06:12.091767       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1217 00:06:16.103225       1 handler_proxy.go:99] no RequestInfo found in the context
	E1217 00:06:16.103276       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1217 00:06:16.103332       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.106.69.233:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.106.69.233:443/apis/metrics.k8s.io/v1beta1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" logger="UnhandledError"
	I1217 00:06:16.117012       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1217 00:06:53.864145       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:39138: use of closed network connection
	E1217 00:06:54.003628       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:39154: use of closed network connection
	I1217 00:06:56.911606       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1217 00:06:57.079240       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.102.219.224"}
	I1217 00:07:25.611948       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1217 00:09:20.901892       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.100.126.59"}
	
	
	==> kube-controller-manager [f55d4645a3da61635311b8471dafe61926de73f6c6575bcdc112a086cfde666a] <==
	I1217 00:05:24.694965       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1217 00:05:24.694979       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1217 00:05:24.695051       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1217 00:05:24.695056       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1217 00:05:24.695055       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1217 00:05:24.695577       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1217 00:05:24.696902       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1217 00:05:24.696930       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1217 00:05:24.699155       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1217 00:05:24.702105       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1217 00:05:24.702163       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1217 00:05:24.702192       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1217 00:05:24.702199       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1217 00:05:24.702204       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1217 00:05:24.703235       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1217 00:05:24.707850       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="addons-401977" podCIDRs=["10.244.0.0/24"]
	I1217 00:05:24.713832       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1217 00:05:54.703218       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1217 00:05:54.703387       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1217 00:05:54.703444       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1217 00:05:54.720901       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1217 00:05:54.724404       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1217 00:05:54.803905       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1217 00:05:54.825100       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1217 00:06:09.655523       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [950dc7c477829d5fc62b7e10ed2edf92016de18feb8bc6c8d8262fbf28097b78] <==
	I1217 00:05:25.987933       1 server_linux.go:53] "Using iptables proxy"
	I1217 00:05:26.237322       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1217 00:05:26.340080       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1217 00:05:26.341801       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1217 00:05:26.341951       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1217 00:05:26.620331       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1217 00:05:26.620449       1 server_linux.go:132] "Using iptables Proxier"
	I1217 00:05:26.741131       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1217 00:05:26.751538       1 server.go:527] "Version info" version="v1.34.2"
	I1217 00:05:26.751683       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 00:05:26.759899       1 config.go:200] "Starting service config controller"
	I1217 00:05:26.760020       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1217 00:05:26.760380       1 config.go:106] "Starting endpoint slice config controller"
	I1217 00:05:26.760462       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1217 00:05:26.760890       1 config.go:403] "Starting serviceCIDR config controller"
	I1217 00:05:26.761185       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1217 00:05:26.761771       1 config.go:309] "Starting node config controller"
	I1217 00:05:26.762040       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1217 00:05:26.762086       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1217 00:05:26.861220       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1217 00:05:26.861297       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1217 00:05:26.862380       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [c85efeb6af746eaf16f9b1ef2458c5065693555ece6e3b595a07ccc7b8c2e6d9] <==
	I1217 00:05:18.235429       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 00:05:18.237477       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1217 00:05:18.237510       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1217 00:05:18.237712       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1217 00:05:18.237739       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1217 00:05:18.239147       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1217 00:05:18.240209       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1217 00:05:18.242207       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1217 00:05:18.242272       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1217 00:05:18.242314       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1217 00:05:18.242571       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1217 00:05:18.242634       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1217 00:05:18.242607       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1217 00:05:18.242661       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1217 00:05:18.242665       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1217 00:05:18.242749       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1217 00:05:18.242756       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1217 00:05:18.242835       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1217 00:05:18.242842       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1217 00:05:18.242856       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1217 00:05:18.242893       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1217 00:05:18.242904       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1217 00:05:18.242923       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1217 00:05:18.242946       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1217 00:05:19.438460       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 17 00:07:36 addons-401977 kubelet[1295]: I1217 00:07:36.165507    1295 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"task-pv-storage\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^5fbad2b2-dadc-11f0-87f2-3a612587efb9\") pod \"9cd585f4-62dd-4d3c-a9ad-df02a7c0d6d0\" (UID: \"9cd585f4-62dd-4d3c-a9ad-df02a7c0d6d0\") "
	Dec 17 00:07:36 addons-401977 kubelet[1295]: I1217 00:07:36.165545    1295 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tvrsp\" (UniqueName: \"kubernetes.io/projected/9cd585f4-62dd-4d3c-a9ad-df02a7c0d6d0-kube-api-access-tvrsp\") pod \"9cd585f4-62dd-4d3c-a9ad-df02a7c0d6d0\" (UID: \"9cd585f4-62dd-4d3c-a9ad-df02a7c0d6d0\") "
	Dec 17 00:07:36 addons-401977 kubelet[1295]: I1217 00:07:36.165549    1295 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9cd585f4-62dd-4d3c-a9ad-df02a7c0d6d0-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "9cd585f4-62dd-4d3c-a9ad-df02a7c0d6d0" (UID: "9cd585f4-62dd-4d3c-a9ad-df02a7c0d6d0"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Dec 17 00:07:36 addons-401977 kubelet[1295]: I1217 00:07:36.165706    1295 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/9cd585f4-62dd-4d3c-a9ad-df02a7c0d6d0-gcp-creds\") on node \"addons-401977\" DevicePath \"\""
	Dec 17 00:07:36 addons-401977 kubelet[1295]: I1217 00:07:36.168081    1295 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9cd585f4-62dd-4d3c-a9ad-df02a7c0d6d0-kube-api-access-tvrsp" (OuterVolumeSpecName: "kube-api-access-tvrsp") pod "9cd585f4-62dd-4d3c-a9ad-df02a7c0d6d0" (UID: "9cd585f4-62dd-4d3c-a9ad-df02a7c0d6d0"). InnerVolumeSpecName "kube-api-access-tvrsp". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Dec 17 00:07:36 addons-401977 kubelet[1295]: I1217 00:07:36.168567    1295 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/hostpath.csi.k8s.io^5fbad2b2-dadc-11f0-87f2-3a612587efb9" (OuterVolumeSpecName: "task-pv-storage") pod "9cd585f4-62dd-4d3c-a9ad-df02a7c0d6d0" (UID: "9cd585f4-62dd-4d3c-a9ad-df02a7c0d6d0"). InnerVolumeSpecName "pvc-45892b1a-8e22-4a79-abbd-f98fcbf9de96". PluginName "kubernetes.io/csi", VolumeGIDValue ""
	Dec 17 00:07:36 addons-401977 kubelet[1295]: I1217 00:07:36.266836    1295 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tvrsp\" (UniqueName: \"kubernetes.io/projected/9cd585f4-62dd-4d3c-a9ad-df02a7c0d6d0-kube-api-access-tvrsp\") on node \"addons-401977\" DevicePath \"\""
	Dec 17 00:07:36 addons-401977 kubelet[1295]: I1217 00:07:36.266902    1295 reconciler_common.go:292] "operationExecutor.UnmountDevice started for volume \"pvc-45892b1a-8e22-4a79-abbd-f98fcbf9de96\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^5fbad2b2-dadc-11f0-87f2-3a612587efb9\") on node \"addons-401977\" "
	Dec 17 00:07:36 addons-401977 kubelet[1295]: I1217 00:07:36.271263    1295 operation_generator.go:895] UnmountDevice succeeded for volume "pvc-45892b1a-8e22-4a79-abbd-f98fcbf9de96" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^5fbad2b2-dadc-11f0-87f2-3a612587efb9") on node "addons-401977"
	Dec 17 00:07:36 addons-401977 kubelet[1295]: I1217 00:07:36.367925    1295 reconciler_common.go:299] "Volume detached for volume \"pvc-45892b1a-8e22-4a79-abbd-f98fcbf9de96\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^5fbad2b2-dadc-11f0-87f2-3a612587efb9\") on node \"addons-401977\" DevicePath \"\""
	Dec 17 00:07:36 addons-401977 kubelet[1295]: I1217 00:07:36.465384    1295 scope.go:117] "RemoveContainer" containerID="7ede2615b13ab88c23eb2405c37f6acc2db89d2c48d07935dde7fd65fd5712c3"
	Dec 17 00:07:36 addons-401977 kubelet[1295]: I1217 00:07:36.474794    1295 scope.go:117] "RemoveContainer" containerID="7ede2615b13ab88c23eb2405c37f6acc2db89d2c48d07935dde7fd65fd5712c3"
	Dec 17 00:07:36 addons-401977 kubelet[1295]: E1217 00:07:36.475315    1295 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ede2615b13ab88c23eb2405c37f6acc2db89d2c48d07935dde7fd65fd5712c3\": container with ID starting with 7ede2615b13ab88c23eb2405c37f6acc2db89d2c48d07935dde7fd65fd5712c3 not found: ID does not exist" containerID="7ede2615b13ab88c23eb2405c37f6acc2db89d2c48d07935dde7fd65fd5712c3"
	Dec 17 00:07:36 addons-401977 kubelet[1295]: I1217 00:07:36.475396    1295 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ede2615b13ab88c23eb2405c37f6acc2db89d2c48d07935dde7fd65fd5712c3"} err="failed to get container status \"7ede2615b13ab88c23eb2405c37f6acc2db89d2c48d07935dde7fd65fd5712c3\": rpc error: code = NotFound desc = could not find container \"7ede2615b13ab88c23eb2405c37f6acc2db89d2c48d07935dde7fd65fd5712c3\": container with ID starting with 7ede2615b13ab88c23eb2405c37f6acc2db89d2c48d07935dde7fd65fd5712c3 not found: ID does not exist"
	Dec 17 00:07:37 addons-401977 kubelet[1295]: I1217 00:07:37.933575    1295 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9cd585f4-62dd-4d3c-a9ad-df02a7c0d6d0" path="/var/lib/kubelet/pods/9cd585f4-62dd-4d3c-a9ad-df02a7c0d6d0/volumes"
	Dec 17 00:07:45 addons-401977 kubelet[1295]: I1217 00:07:45.931527    1295 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-xk8ql" secret="" err="secret \"gcp-auth\" not found"
	Dec 17 00:07:50 addons-401977 kubelet[1295]: I1217 00:07:50.930788    1295 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-58fhj" secret="" err="secret \"gcp-auth\" not found"
	Dec 17 00:07:51 addons-401977 kubelet[1295]: I1217 00:07:51.931309    1295 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-zhxtw" secret="" err="secret \"gcp-auth\" not found"
	Dec 17 00:08:19 addons-401977 kubelet[1295]: I1217 00:08:19.971488    1295 scope.go:117] "RemoveContainer" containerID="a0f9f9e9a9fc351f8214afda4386b5ee28da46db4dcaba1a34a1edd78727628e"
	Dec 17 00:08:19 addons-401977 kubelet[1295]: I1217 00:08:19.980775    1295 scope.go:117] "RemoveContainer" containerID="ad581cfe93aabdaaeb169edb6be6fc951e5d9c8f7847b987cdbf112e3afe4f2a"
	Dec 17 00:08:55 addons-401977 kubelet[1295]: I1217 00:08:55.931482    1295 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-zhxtw" secret="" err="secret \"gcp-auth\" not found"
	Dec 17 00:09:03 addons-401977 kubelet[1295]: I1217 00:09:03.930854    1295 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-58fhj" secret="" err="secret \"gcp-auth\" not found"
	Dec 17 00:09:05 addons-401977 kubelet[1295]: I1217 00:09:05.931454    1295 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-xk8ql" secret="" err="secret \"gcp-auth\" not found"
	Dec 17 00:09:20 addons-401977 kubelet[1295]: I1217 00:09:20.886364    1295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/7518e48a-3661-42ad-89a3-b2ab2c8fc34a-gcp-creds\") pod \"hello-world-app-5d498dc89-r867g\" (UID: \"7518e48a-3661-42ad-89a3-b2ab2c8fc34a\") " pod="default/hello-world-app-5d498dc89-r867g"
	Dec 17 00:09:20 addons-401977 kubelet[1295]: I1217 00:09:20.886423    1295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvttq\" (UniqueName: \"kubernetes.io/projected/7518e48a-3661-42ad-89a3-b2ab2c8fc34a-kube-api-access-dvttq\") pod \"hello-world-app-5d498dc89-r867g\" (UID: \"7518e48a-3661-42ad-89a3-b2ab2c8fc34a\") " pod="default/hello-world-app-5d498dc89-r867g"
	
	
	==> storage-provisioner [383049ced70e6adc7b2ef0d1a415cd527a2670898bfb2873c9b8955afffff3eb] <==
	W1217 00:08:57.925731       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:08:59.928646       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:08:59.936480       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:09:01.939200       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:09:01.942506       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:09:03.945166       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:09:03.948313       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:09:05.950881       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:09:05.954697       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:09:07.957739       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:09:07.960922       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:09:09.963398       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:09:09.968087       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:09:11.970591       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:09:11.973898       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:09:13.976604       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:09:13.979911       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:09:15.982429       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:09:15.986899       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:09:17.989744       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:09:17.992985       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:09:19.995960       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:09:19.999866       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:09:22.002571       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:09:22.007717       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-401977 -n addons-401977
helpers_test.go:270: (dbg) Run:  kubectl --context addons-401977 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: ingress-nginx-admission-create-9xxch ingress-nginx-admission-patch-md92j
helpers_test.go:283: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context addons-401977 describe pod ingress-nginx-admission-create-9xxch ingress-nginx-admission-patch-md92j
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context addons-401977 describe pod ingress-nginx-admission-create-9xxch ingress-nginx-admission-patch-md92j: exit status 1 (52.616211ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-9xxch" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-md92j" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context addons-401977 describe pod ingress-nginx-admission-create-9xxch ingress-nginx-admission-patch-md92j: exit status 1
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-401977 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-401977 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (229.174365ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 00:09:23.280595   32329 out.go:360] Setting OutFile to fd 1 ...
	I1217 00:09:23.281090   32329 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:09:23.281100   32329 out.go:374] Setting ErrFile to fd 2...
	I1217 00:09:23.281105   32329 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:09:23.281307   32329 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-12816/.minikube/bin
	I1217 00:09:23.281553   32329 mustload.go:66] Loading cluster: addons-401977
	I1217 00:09:23.281872   32329 config.go:182] Loaded profile config "addons-401977": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:09:23.281890   32329 addons.go:622] checking whether the cluster is paused
	I1217 00:09:23.281970   32329 config.go:182] Loaded profile config "addons-401977": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:09:23.281982   32329 host.go:66] Checking if "addons-401977" exists ...
	I1217 00:09:23.282357   32329 cli_runner.go:164] Run: docker container inspect addons-401977 --format={{.State.Status}}
	I1217 00:09:23.300596   32329 ssh_runner.go:195] Run: systemctl --version
	I1217 00:09:23.300639   32329 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-401977
	I1217 00:09:23.318068   32329 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/addons-401977/id_rsa Username:docker}
	I1217 00:09:23.408151   32329 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 00:09:23.408242   32329 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 00:09:23.436345   32329 cri.go:89] found id: "7d625ea0827567c1ecf6072d9f705ee4a0e2896e259825605202bb193195e013"
	I1217 00:09:23.436364   32329 cri.go:89] found id: "ad54e02660b07cbde6d493c7d5e3ed172475b94a9eaee4e87d4bd9ef151c0b22"
	I1217 00:09:23.436368   32329 cri.go:89] found id: "45b944b097394c01a6a9f73b7481a21c99e516a6329609ae554a14bb17a1b0c4"
	I1217 00:09:23.436373   32329 cri.go:89] found id: "96a32ce198ee5fea679ca8aa4c1ec23792c97d3131fb348ba7d61057703f8b98"
	I1217 00:09:23.436376   32329 cri.go:89] found id: "6060f042efdda63b0e493f156e033554f81a8024186ecd9b75e89903f49cc5a6"
	I1217 00:09:23.436380   32329 cri.go:89] found id: "cfb270351b7717448c0caad78a981c83e200c3645e7ea23795af66b940e7f694"
	I1217 00:09:23.436382   32329 cri.go:89] found id: "60a76e334b179e66cc6937fdcf120c474c69221436fe9732a138ce177b409c81"
	I1217 00:09:23.436385   32329 cri.go:89] found id: "404a83db71038ead5c1b120d09189bcde64f22a00bc12c772da57ed50d0b4e31"
	I1217 00:09:23.436388   32329 cri.go:89] found id: "7477be1e8e83d8e93db214a41f2cbe2dac4702f15bafed422689a1ad41a282ee"
	I1217 00:09:23.436393   32329 cri.go:89] found id: "88b4569360ba98b30f65b36287d6c38a51676cebffa1785dc5414861aa1a0629"
	I1217 00:09:23.436396   32329 cri.go:89] found id: "7ad73ae76171d2d2105f4ccfe0862424948abc4be7d39a9f3d3999660c222211"
	I1217 00:09:23.436399   32329 cri.go:89] found id: "e77e53ca2e567bf130231e95ef7e993ca42c0bc61aab6ffced345e9e69c005cc"
	I1217 00:09:23.436401   32329 cri.go:89] found id: "d5d932c1082d35b313501328162c7f2f663374a7e3c58c4c2b1114359e9493df"
	I1217 00:09:23.436404   32329 cri.go:89] found id: "5d7bc94a6e7622d199da43ca8e9942b4e849ae76949d0554b9d548a510dd26ce"
	I1217 00:09:23.436407   32329 cri.go:89] found id: "b3c5366ec83c76413d2675b009b0c59e92e94121bf53dd83abb96b2fc0bd58b7"
	I1217 00:09:23.436414   32329 cri.go:89] found id: "dfd9e15edab91358da5ee7de7e20baaf2f8b820f9507af61b26dcbf0be9749ac"
	I1217 00:09:23.436419   32329 cri.go:89] found id: "a380c22257b5cfb547f66e134e244b2c1d6bd55bad431f846b76089ef28f6a89"
	I1217 00:09:23.436423   32329 cri.go:89] found id: "f6e58bb2900bb7013f6f81ccca2250cb1b6547be3edcc33d4bc867ae9d0b4072"
	I1217 00:09:23.436426   32329 cri.go:89] found id: "383049ced70e6adc7b2ef0d1a415cd527a2670898bfb2873c9b8955afffff3eb"
	I1217 00:09:23.436429   32329 cri.go:89] found id: "840301d1bb594051e430e85719c2707ed97013c7e3269f84012213ab768d9935"
	I1217 00:09:23.436432   32329 cri.go:89] found id: "950dc7c477829d5fc62b7e10ed2edf92016de18feb8bc6c8d8262fbf28097b78"
	I1217 00:09:23.436435   32329 cri.go:89] found id: "c85efeb6af746eaf16f9b1ef2458c5065693555ece6e3b595a07ccc7b8c2e6d9"
	I1217 00:09:23.436437   32329 cri.go:89] found id: "f55d4645a3da61635311b8471dafe61926de73f6c6575bcdc112a086cfde666a"
	I1217 00:09:23.436440   32329 cri.go:89] found id: "a9fb6926bb935bb35c23586c7a59d3ecc32fdac56e6508767e75f4b3b5db4340"
	I1217 00:09:23.436443   32329 cri.go:89] found id: "a5dab92a052f84df44c207e5dd5c238be41faadb22b92368fa8135c2af2fd265"
	I1217 00:09:23.436446   32329 cri.go:89] found id: ""
	I1217 00:09:23.436481   32329 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 00:09:23.449786   32329 out.go:203] 
	W1217 00:09:23.450831   32329 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T00:09:23Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T00:09:23Z" level=error msg="open /run/runc: no such file or directory"
	
	W1217 00:09:23.450846   32329 out.go:285] * 
	* 
	W1217 00:09:23.453902   32329 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 00:09:23.454929   32329 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable ingress-dns addon: args "out/minikube-linux-amd64 -p addons-401977 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-401977 addons disable ingress --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-401977 addons disable ingress --alsologtostderr -v=1: exit status 11 (230.197177ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 00:09:23.513252   32391 out.go:360] Setting OutFile to fd 1 ...
	I1217 00:09:23.513534   32391 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:09:23.513546   32391 out.go:374] Setting ErrFile to fd 2...
	I1217 00:09:23.513552   32391 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:09:23.513738   32391 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-12816/.minikube/bin
	I1217 00:09:23.514024   32391 mustload.go:66] Loading cluster: addons-401977
	I1217 00:09:23.514371   32391 config.go:182] Loaded profile config "addons-401977": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:09:23.514393   32391 addons.go:622] checking whether the cluster is paused
	I1217 00:09:23.514488   32391 config.go:182] Loaded profile config "addons-401977": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:09:23.514502   32391 host.go:66] Checking if "addons-401977" exists ...
	I1217 00:09:23.514844   32391 cli_runner.go:164] Run: docker container inspect addons-401977 --format={{.State.Status}}
	I1217 00:09:23.531743   32391 ssh_runner.go:195] Run: systemctl --version
	I1217 00:09:23.531807   32391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-401977
	I1217 00:09:23.548599   32391 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/addons-401977/id_rsa Username:docker}
	I1217 00:09:23.639323   32391 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 00:09:23.639381   32391 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 00:09:23.666698   32391 cri.go:89] found id: "7d625ea0827567c1ecf6072d9f705ee4a0e2896e259825605202bb193195e013"
	I1217 00:09:23.666716   32391 cri.go:89] found id: "ad54e02660b07cbde6d493c7d5e3ed172475b94a9eaee4e87d4bd9ef151c0b22"
	I1217 00:09:23.666721   32391 cri.go:89] found id: "45b944b097394c01a6a9f73b7481a21c99e516a6329609ae554a14bb17a1b0c4"
	I1217 00:09:23.666724   32391 cri.go:89] found id: "96a32ce198ee5fea679ca8aa4c1ec23792c97d3131fb348ba7d61057703f8b98"
	I1217 00:09:23.666727   32391 cri.go:89] found id: "6060f042efdda63b0e493f156e033554f81a8024186ecd9b75e89903f49cc5a6"
	I1217 00:09:23.666730   32391 cri.go:89] found id: "cfb270351b7717448c0caad78a981c83e200c3645e7ea23795af66b940e7f694"
	I1217 00:09:23.666733   32391 cri.go:89] found id: "60a76e334b179e66cc6937fdcf120c474c69221436fe9732a138ce177b409c81"
	I1217 00:09:23.666736   32391 cri.go:89] found id: "404a83db71038ead5c1b120d09189bcde64f22a00bc12c772da57ed50d0b4e31"
	I1217 00:09:23.666740   32391 cri.go:89] found id: "7477be1e8e83d8e93db214a41f2cbe2dac4702f15bafed422689a1ad41a282ee"
	I1217 00:09:23.666746   32391 cri.go:89] found id: "88b4569360ba98b30f65b36287d6c38a51676cebffa1785dc5414861aa1a0629"
	I1217 00:09:23.666758   32391 cri.go:89] found id: "7ad73ae76171d2d2105f4ccfe0862424948abc4be7d39a9f3d3999660c222211"
	I1217 00:09:23.666764   32391 cri.go:89] found id: "e77e53ca2e567bf130231e95ef7e993ca42c0bc61aab6ffced345e9e69c005cc"
	I1217 00:09:23.666773   32391 cri.go:89] found id: "d5d932c1082d35b313501328162c7f2f663374a7e3c58c4c2b1114359e9493df"
	I1217 00:09:23.666778   32391 cri.go:89] found id: "5d7bc94a6e7622d199da43ca8e9942b4e849ae76949d0554b9d548a510dd26ce"
	I1217 00:09:23.666786   32391 cri.go:89] found id: "b3c5366ec83c76413d2675b009b0c59e92e94121bf53dd83abb96b2fc0bd58b7"
	I1217 00:09:23.666803   32391 cri.go:89] found id: "dfd9e15edab91358da5ee7de7e20baaf2f8b820f9507af61b26dcbf0be9749ac"
	I1217 00:09:23.666812   32391 cri.go:89] found id: "a380c22257b5cfb547f66e134e244b2c1d6bd55bad431f846b76089ef28f6a89"
	I1217 00:09:23.666816   32391 cri.go:89] found id: "f6e58bb2900bb7013f6f81ccca2250cb1b6547be3edcc33d4bc867ae9d0b4072"
	I1217 00:09:23.666819   32391 cri.go:89] found id: "383049ced70e6adc7b2ef0d1a415cd527a2670898bfb2873c9b8955afffff3eb"
	I1217 00:09:23.666821   32391 cri.go:89] found id: "840301d1bb594051e430e85719c2707ed97013c7e3269f84012213ab768d9935"
	I1217 00:09:23.666832   32391 cri.go:89] found id: "950dc7c477829d5fc62b7e10ed2edf92016de18feb8bc6c8d8262fbf28097b78"
	I1217 00:09:23.666837   32391 cri.go:89] found id: "c85efeb6af746eaf16f9b1ef2458c5065693555ece6e3b595a07ccc7b8c2e6d9"
	I1217 00:09:23.666840   32391 cri.go:89] found id: "f55d4645a3da61635311b8471dafe61926de73f6c6575bcdc112a086cfde666a"
	I1217 00:09:23.666846   32391 cri.go:89] found id: "a9fb6926bb935bb35c23586c7a59d3ecc32fdac56e6508767e75f4b3b5db4340"
	I1217 00:09:23.666860   32391 cri.go:89] found id: "a5dab92a052f84df44c207e5dd5c238be41faadb22b92368fa8135c2af2fd265"
	I1217 00:09:23.666865   32391 cri.go:89] found id: ""
	I1217 00:09:23.666902   32391 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 00:09:23.680493   32391 out.go:203] 
	W1217 00:09:23.681618   32391 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T00:09:23Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T00:09:23Z" level=error msg="open /run/runc: no such file or directory"
	
	W1217 00:09:23.681641   32391 out.go:285] * 
	* 
	W1217 00:09:23.684509   32391 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 00:09:23.685677   32391 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable ingress addon: args "out/minikube-linux-amd64 -p addons-401977 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (147.01s)

                                                
                                    
TestAddons/parallel/InspektorGadget (5.24s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:353: "gadget-8kklk" [11b6e908-e12a-4cad-902c-ef57c5a50d17] Running
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.003551851s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-401977 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-401977 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (230.367737ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 00:07:04.604447   28157 out.go:360] Setting OutFile to fd 1 ...
	I1217 00:07:04.604566   28157 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:07:04.604574   28157 out.go:374] Setting ErrFile to fd 2...
	I1217 00:07:04.604578   28157 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:07:04.604762   28157 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-12816/.minikube/bin
	I1217 00:07:04.605002   28157 mustload.go:66] Loading cluster: addons-401977
	I1217 00:07:04.605298   28157 config.go:182] Loaded profile config "addons-401977": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:07:04.605316   28157 addons.go:622] checking whether the cluster is paused
	I1217 00:07:04.605392   28157 config.go:182] Loaded profile config "addons-401977": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:07:04.605403   28157 host.go:66] Checking if "addons-401977" exists ...
	I1217 00:07:04.605724   28157 cli_runner.go:164] Run: docker container inspect addons-401977 --format={{.State.Status}}
	I1217 00:07:04.623038   28157 ssh_runner.go:195] Run: systemctl --version
	I1217 00:07:04.623082   28157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-401977
	I1217 00:07:04.639649   28157 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/addons-401977/id_rsa Username:docker}
	I1217 00:07:04.731664   28157 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 00:07:04.731751   28157 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 00:07:04.759531   28157 cri.go:89] found id: "ad54e02660b07cbde6d493c7d5e3ed172475b94a9eaee4e87d4bd9ef151c0b22"
	I1217 00:07:04.759549   28157 cri.go:89] found id: "45b944b097394c01a6a9f73b7481a21c99e516a6329609ae554a14bb17a1b0c4"
	I1217 00:07:04.759564   28157 cri.go:89] found id: "96a32ce198ee5fea679ca8aa4c1ec23792c97d3131fb348ba7d61057703f8b98"
	I1217 00:07:04.759569   28157 cri.go:89] found id: "6060f042efdda63b0e493f156e033554f81a8024186ecd9b75e89903f49cc5a6"
	I1217 00:07:04.759573   28157 cri.go:89] found id: "cfb270351b7717448c0caad78a981c83e200c3645e7ea23795af66b940e7f694"
	I1217 00:07:04.759577   28157 cri.go:89] found id: "60a76e334b179e66cc6937fdcf120c474c69221436fe9732a138ce177b409c81"
	I1217 00:07:04.759579   28157 cri.go:89] found id: "404a83db71038ead5c1b120d09189bcde64f22a00bc12c772da57ed50d0b4e31"
	I1217 00:07:04.759582   28157 cri.go:89] found id: "7477be1e8e83d8e93db214a41f2cbe2dac4702f15bafed422689a1ad41a282ee"
	I1217 00:07:04.759585   28157 cri.go:89] found id: "88b4569360ba98b30f65b36287d6c38a51676cebffa1785dc5414861aa1a0629"
	I1217 00:07:04.759590   28157 cri.go:89] found id: "7ad73ae76171d2d2105f4ccfe0862424948abc4be7d39a9f3d3999660c222211"
	I1217 00:07:04.759592   28157 cri.go:89] found id: "e77e53ca2e567bf130231e95ef7e993ca42c0bc61aab6ffced345e9e69c005cc"
	I1217 00:07:04.759595   28157 cri.go:89] found id: "d5d932c1082d35b313501328162c7f2f663374a7e3c58c4c2b1114359e9493df"
	I1217 00:07:04.759598   28157 cri.go:89] found id: "5d7bc94a6e7622d199da43ca8e9942b4e849ae76949d0554b9d548a510dd26ce"
	I1217 00:07:04.759601   28157 cri.go:89] found id: "b3c5366ec83c76413d2675b009b0c59e92e94121bf53dd83abb96b2fc0bd58b7"
	I1217 00:07:04.759604   28157 cri.go:89] found id: "dfd9e15edab91358da5ee7de7e20baaf2f8b820f9507af61b26dcbf0be9749ac"
	I1217 00:07:04.759608   28157 cri.go:89] found id: "a380c22257b5cfb547f66e134e244b2c1d6bd55bad431f846b76089ef28f6a89"
	I1217 00:07:04.759614   28157 cri.go:89] found id: "f6e58bb2900bb7013f6f81ccca2250cb1b6547be3edcc33d4bc867ae9d0b4072"
	I1217 00:07:04.759619   28157 cri.go:89] found id: "383049ced70e6adc7b2ef0d1a415cd527a2670898bfb2873c9b8955afffff3eb"
	I1217 00:07:04.759627   28157 cri.go:89] found id: "840301d1bb594051e430e85719c2707ed97013c7e3269f84012213ab768d9935"
	I1217 00:07:04.759630   28157 cri.go:89] found id: "950dc7c477829d5fc62b7e10ed2edf92016de18feb8bc6c8d8262fbf28097b78"
	I1217 00:07:04.759635   28157 cri.go:89] found id: "c85efeb6af746eaf16f9b1ef2458c5065693555ece6e3b595a07ccc7b8c2e6d9"
	I1217 00:07:04.759637   28157 cri.go:89] found id: "f55d4645a3da61635311b8471dafe61926de73f6c6575bcdc112a086cfde666a"
	I1217 00:07:04.759640   28157 cri.go:89] found id: "a9fb6926bb935bb35c23586c7a59d3ecc32fdac56e6508767e75f4b3b5db4340"
	I1217 00:07:04.759642   28157 cri.go:89] found id: "a5dab92a052f84df44c207e5dd5c238be41faadb22b92368fa8135c2af2fd265"
	I1217 00:07:04.759645   28157 cri.go:89] found id: ""
	I1217 00:07:04.759683   28157 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 00:07:04.773575   28157 out.go:203] 
	W1217 00:07:04.774699   28157 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T00:07:04Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T00:07:04Z" level=error msg="open /run/runc: no such file or directory"
	
	W1217 00:07:04.774724   28157 out.go:285] * 
	* 
	W1217 00:07:04.777975   28157 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 00:07:04.779399   28157 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable inspektor-gadget addon: args "out/minikube-linux-amd64 -p addons-401977 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (5.24s)

                                                
                                    
TestAddons/parallel/MetricsServer (5.31s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:457: metrics-server stabilized in 3.063405ms
I1217 00:06:54.248183   16354 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1217 00:06:54.248205   16354 kapi.go:107] duration metric: took 3.539752ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:353: "metrics-server-85b7d694d7-krz87" [e7e57a4b-dfdd-48e7-93e6-72b817b73907] Running
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003585138s
addons_test.go:465: (dbg) Run:  kubectl --context addons-401977 top pods -n kube-system
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-401977 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-401977 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (241.100089ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 00:06:59.363163   27694 out.go:360] Setting OutFile to fd 1 ...
	I1217 00:06:59.363801   27694 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:06:59.363810   27694 out.go:374] Setting ErrFile to fd 2...
	I1217 00:06:59.363815   27694 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:06:59.364036   27694 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-12816/.minikube/bin
	I1217 00:06:59.364271   27694 mustload.go:66] Loading cluster: addons-401977
	I1217 00:06:59.364552   27694 config.go:182] Loaded profile config "addons-401977": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:06:59.364569   27694 addons.go:622] checking whether the cluster is paused
	I1217 00:06:59.364644   27694 config.go:182] Loaded profile config "addons-401977": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:06:59.364655   27694 host.go:66] Checking if "addons-401977" exists ...
	I1217 00:06:59.365007   27694 cli_runner.go:164] Run: docker container inspect addons-401977 --format={{.State.Status}}
	I1217 00:06:59.383471   27694 ssh_runner.go:195] Run: systemctl --version
	I1217 00:06:59.383522   27694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-401977
	I1217 00:06:59.403548   27694 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/addons-401977/id_rsa Username:docker}
	I1217 00:06:59.497183   27694 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 00:06:59.497259   27694 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 00:06:59.525767   27694 cri.go:89] found id: "ad54e02660b07cbde6d493c7d5e3ed172475b94a9eaee4e87d4bd9ef151c0b22"
	I1217 00:06:59.525786   27694 cri.go:89] found id: "45b944b097394c01a6a9f73b7481a21c99e516a6329609ae554a14bb17a1b0c4"
	I1217 00:06:59.525792   27694 cri.go:89] found id: "96a32ce198ee5fea679ca8aa4c1ec23792c97d3131fb348ba7d61057703f8b98"
	I1217 00:06:59.525795   27694 cri.go:89] found id: "6060f042efdda63b0e493f156e033554f81a8024186ecd9b75e89903f49cc5a6"
	I1217 00:06:59.525798   27694 cri.go:89] found id: "cfb270351b7717448c0caad78a981c83e200c3645e7ea23795af66b940e7f694"
	I1217 00:06:59.525801   27694 cri.go:89] found id: "60a76e334b179e66cc6937fdcf120c474c69221436fe9732a138ce177b409c81"
	I1217 00:06:59.525803   27694 cri.go:89] found id: "404a83db71038ead5c1b120d09189bcde64f22a00bc12c772da57ed50d0b4e31"
	I1217 00:06:59.525807   27694 cri.go:89] found id: "7477be1e8e83d8e93db214a41f2cbe2dac4702f15bafed422689a1ad41a282ee"
	I1217 00:06:59.525812   27694 cri.go:89] found id: "88b4569360ba98b30f65b36287d6c38a51676cebffa1785dc5414861aa1a0629"
	I1217 00:06:59.525827   27694 cri.go:89] found id: "7ad73ae76171d2d2105f4ccfe0862424948abc4be7d39a9f3d3999660c222211"
	I1217 00:06:59.525832   27694 cri.go:89] found id: "e77e53ca2e567bf130231e95ef7e993ca42c0bc61aab6ffced345e9e69c005cc"
	I1217 00:06:59.525837   27694 cri.go:89] found id: "d5d932c1082d35b313501328162c7f2f663374a7e3c58c4c2b1114359e9493df"
	I1217 00:06:59.525846   27694 cri.go:89] found id: "5d7bc94a6e7622d199da43ca8e9942b4e849ae76949d0554b9d548a510dd26ce"
	I1217 00:06:59.525851   27694 cri.go:89] found id: "b3c5366ec83c76413d2675b009b0c59e92e94121bf53dd83abb96b2fc0bd58b7"
	I1217 00:06:59.525858   27694 cri.go:89] found id: "dfd9e15edab91358da5ee7de7e20baaf2f8b820f9507af61b26dcbf0be9749ac"
	I1217 00:06:59.525867   27694 cri.go:89] found id: "a380c22257b5cfb547f66e134e244b2c1d6bd55bad431f846b76089ef28f6a89"
	I1217 00:06:59.525873   27694 cri.go:89] found id: "f6e58bb2900bb7013f6f81ccca2250cb1b6547be3edcc33d4bc867ae9d0b4072"
	I1217 00:06:59.525877   27694 cri.go:89] found id: "383049ced70e6adc7b2ef0d1a415cd527a2670898bfb2873c9b8955afffff3eb"
	I1217 00:06:59.525880   27694 cri.go:89] found id: "840301d1bb594051e430e85719c2707ed97013c7e3269f84012213ab768d9935"
	I1217 00:06:59.525883   27694 cri.go:89] found id: "950dc7c477829d5fc62b7e10ed2edf92016de18feb8bc6c8d8262fbf28097b78"
	I1217 00:06:59.525885   27694 cri.go:89] found id: "c85efeb6af746eaf16f9b1ef2458c5065693555ece6e3b595a07ccc7b8c2e6d9"
	I1217 00:06:59.525888   27694 cri.go:89] found id: "f55d4645a3da61635311b8471dafe61926de73f6c6575bcdc112a086cfde666a"
	I1217 00:06:59.525891   27694 cri.go:89] found id: "a9fb6926bb935bb35c23586c7a59d3ecc32fdac56e6508767e75f4b3b5db4340"
	I1217 00:06:59.525894   27694 cri.go:89] found id: "a5dab92a052f84df44c207e5dd5c238be41faadb22b92368fa8135c2af2fd265"
	I1217 00:06:59.525897   27694 cri.go:89] found id: ""
	I1217 00:06:59.525940   27694 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 00:06:59.539359   27694 out.go:203] 
	W1217 00:06:59.540475   27694 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T00:06:59Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T00:06:59Z" level=error msg="open /run/runc: no such file or directory"
	
	W1217 00:06:59.540500   27694 out.go:285] * 
	* 
	W1217 00:06:59.543816   27694 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 00:06:59.544889   27694 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable metrics-server addon: args "out/minikube-linux-amd64 -p addons-401977 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (5.31s)
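Every disable/enable failure in this report follows the same shape: the addon command's paused check lists kube-system containers with crictl (which succeeds, hence the "found id:" lines), then runs "sudo runc list -f json", which exits 1 on this crio node because /run/runc does not exist, so minikube aborts with MK_ADDON_DISABLE_PAUSED. Below is a minimal, stand-alone Go sketch (not minikube's own code) that re-runs the same two commands inside the node container over docker exec; it assumes docker is on PATH and that the profile container "addons-401977" from this run is still up.

// repro_paused_check.go: re-run the two commands the failing paused check executes
// inside the kic node container. Assumptions: docker is on PATH and the profile
// container "addons-401977" from this run is still running. Not minikube's own code.
package main

import (
	"fmt"
	"os/exec"
)

// runInNode mirrors what the logs show over SSH: sudo <args...> inside the node.
func runInNode(container string, args ...string) (string, error) {
	cmd := exec.Command("docker", append([]string{"exec", container, "sudo"}, args...)...)
	out, err := cmd.CombinedOutput()
	return string(out), err
}

func main() {
	const node = "addons-401977" // assumption: profile name taken from this run

	// Step 1: listing kube-system containers with crictl succeeds in the report.
	if out, err := runInNode(node, "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system"); err != nil {
		fmt.Printf("crictl ps failed: %v\n%s", err, out)
	} else {
		fmt.Printf("crictl ps ok: %d bytes of container IDs\n", len(out))
	}

	// Step 2: "runc list -f json" is what fails with
	// "open /run/runc: no such file or directory" in the stderr above.
	if out, err := runInNode(node, "runc", "list", "-f", "json"); err != nil {
		fmt.Printf("runc list failed (as in the report): %v\n%s", err, out)
	} else {
		fmt.Printf("runc list ok:\n%s", out)
	}
}

If the sketch reproduces the same "open /run/runc" error while crictl keeps working, the problem sits in the node's runc state directory rather than in any individual addon.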

TestAddons/parallel/CSI (43.02s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1217 00:06:54.244679   16354 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
addons_test.go:551: csi-hostpath-driver pods stabilized in 3.549315ms
addons_test.go:554: (dbg) Run:  kubectl --context addons-401977 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:559: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-401977 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-401977 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-401977 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-401977 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-401977 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-401977 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-401977 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-401977 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-401977 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-401977 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-401977 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-401977 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-401977 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-401977 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-401977 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-401977 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-401977 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-401977 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-401977 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-401977 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-401977 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-401977 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-401977 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-401977 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-401977 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-401977 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:564: (dbg) Run:  kubectl --context addons-401977 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:353: "task-pv-pod" [d0e76bfc-42bb-4b16-9a01-7cc50c461cd2] Pending
helpers_test.go:353: "task-pv-pod" [d0e76bfc-42bb-4b16-9a01-7cc50c461cd2] Running
addons_test.go:569: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 6.00382954s
addons_test.go:574: (dbg) Run:  kubectl --context addons-401977 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:428: (dbg) Run:  kubectl --context addons-401977 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:428: (dbg) Run:  kubectl --context addons-401977 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:584: (dbg) Run:  kubectl --context addons-401977 delete pod task-pv-pod
addons_test.go:590: (dbg) Run:  kubectl --context addons-401977 delete pvc hpvc
addons_test.go:596: (dbg) Run:  kubectl --context addons-401977 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:601: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-401977 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-401977 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:606: (dbg) Run:  kubectl --context addons-401977 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:353: "task-pv-pod-restore" [9cd585f4-62dd-4d3c-a9ad-df02a7c0d6d0] Pending
helpers_test.go:353: "task-pv-pod-restore" [9cd585f4-62dd-4d3c-a9ad-df02a7c0d6d0] Running
addons_test.go:611: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003451055s
addons_test.go:616: (dbg) Run:  kubectl --context addons-401977 delete pod task-pv-pod-restore
addons_test.go:620: (dbg) Run:  kubectl --context addons-401977 delete pvc hpvc-restore
addons_test.go:624: (dbg) Run:  kubectl --context addons-401977 delete volumesnapshot new-snapshot-demo
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-401977 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-401977 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (229.931124ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1217 00:07:36.851925   30209 out.go:360] Setting OutFile to fd 1 ...
	I1217 00:07:36.852331   30209 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:07:36.852343   30209 out.go:374] Setting ErrFile to fd 2...
	I1217 00:07:36.852349   30209 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:07:36.852536   30209 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-12816/.minikube/bin
	I1217 00:07:36.852805   30209 mustload.go:66] Loading cluster: addons-401977
	I1217 00:07:36.853139   30209 config.go:182] Loaded profile config "addons-401977": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:07:36.853163   30209 addons.go:622] checking whether the cluster is paused
	I1217 00:07:36.853256   30209 config.go:182] Loaded profile config "addons-401977": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:07:36.853272   30209 host.go:66] Checking if "addons-401977" exists ...
	I1217 00:07:36.853674   30209 cli_runner.go:164] Run: docker container inspect addons-401977 --format={{.State.Status}}
	I1217 00:07:36.871831   30209 ssh_runner.go:195] Run: systemctl --version
	I1217 00:07:36.871878   30209 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-401977
	I1217 00:07:36.888723   30209 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/addons-401977/id_rsa Username:docker}
	I1217 00:07:36.980118   30209 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 00:07:36.980190   30209 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 00:07:37.007878   30209 cri.go:89] found id: "7d625ea0827567c1ecf6072d9f705ee4a0e2896e259825605202bb193195e013"
	I1217 00:07:37.007898   30209 cri.go:89] found id: "ad54e02660b07cbde6d493c7d5e3ed172475b94a9eaee4e87d4bd9ef151c0b22"
	I1217 00:07:37.007903   30209 cri.go:89] found id: "45b944b097394c01a6a9f73b7481a21c99e516a6329609ae554a14bb17a1b0c4"
	I1217 00:07:37.007908   30209 cri.go:89] found id: "96a32ce198ee5fea679ca8aa4c1ec23792c97d3131fb348ba7d61057703f8b98"
	I1217 00:07:37.007912   30209 cri.go:89] found id: "6060f042efdda63b0e493f156e033554f81a8024186ecd9b75e89903f49cc5a6"
	I1217 00:07:37.007917   30209 cri.go:89] found id: "cfb270351b7717448c0caad78a981c83e200c3645e7ea23795af66b940e7f694"
	I1217 00:07:37.007921   30209 cri.go:89] found id: "60a76e334b179e66cc6937fdcf120c474c69221436fe9732a138ce177b409c81"
	I1217 00:07:37.007926   30209 cri.go:89] found id: "404a83db71038ead5c1b120d09189bcde64f22a00bc12c772da57ed50d0b4e31"
	I1217 00:07:37.007930   30209 cri.go:89] found id: "7477be1e8e83d8e93db214a41f2cbe2dac4702f15bafed422689a1ad41a282ee"
	I1217 00:07:37.007943   30209 cri.go:89] found id: "88b4569360ba98b30f65b36287d6c38a51676cebffa1785dc5414861aa1a0629"
	I1217 00:07:37.007951   30209 cri.go:89] found id: "7ad73ae76171d2d2105f4ccfe0862424948abc4be7d39a9f3d3999660c222211"
	I1217 00:07:37.007957   30209 cri.go:89] found id: "e77e53ca2e567bf130231e95ef7e993ca42c0bc61aab6ffced345e9e69c005cc"
	I1217 00:07:37.007964   30209 cri.go:89] found id: "d5d932c1082d35b313501328162c7f2f663374a7e3c58c4c2b1114359e9493df"
	I1217 00:07:37.007970   30209 cri.go:89] found id: "5d7bc94a6e7622d199da43ca8e9942b4e849ae76949d0554b9d548a510dd26ce"
	I1217 00:07:37.007977   30209 cri.go:89] found id: "b3c5366ec83c76413d2675b009b0c59e92e94121bf53dd83abb96b2fc0bd58b7"
	I1217 00:07:37.007985   30209 cri.go:89] found id: "dfd9e15edab91358da5ee7de7e20baaf2f8b820f9507af61b26dcbf0be9749ac"
	I1217 00:07:37.008002   30209 cri.go:89] found id: "a380c22257b5cfb547f66e134e244b2c1d6bd55bad431f846b76089ef28f6a89"
	I1217 00:07:37.008010   30209 cri.go:89] found id: "f6e58bb2900bb7013f6f81ccca2250cb1b6547be3edcc33d4bc867ae9d0b4072"
	I1217 00:07:37.008019   30209 cri.go:89] found id: "383049ced70e6adc7b2ef0d1a415cd527a2670898bfb2873c9b8955afffff3eb"
	I1217 00:07:37.008023   30209 cri.go:89] found id: "840301d1bb594051e430e85719c2707ed97013c7e3269f84012213ab768d9935"
	I1217 00:07:37.008027   30209 cri.go:89] found id: "950dc7c477829d5fc62b7e10ed2edf92016de18feb8bc6c8d8262fbf28097b78"
	I1217 00:07:37.008030   30209 cri.go:89] found id: "c85efeb6af746eaf16f9b1ef2458c5065693555ece6e3b595a07ccc7b8c2e6d9"
	I1217 00:07:37.008034   30209 cri.go:89] found id: "f55d4645a3da61635311b8471dafe61926de73f6c6575bcdc112a086cfde666a"
	I1217 00:07:37.008038   30209 cri.go:89] found id: "a9fb6926bb935bb35c23586c7a59d3ecc32fdac56e6508767e75f4b3b5db4340"
	I1217 00:07:37.008043   30209 cri.go:89] found id: "a5dab92a052f84df44c207e5dd5c238be41faadb22b92368fa8135c2af2fd265"
	I1217 00:07:37.008049   30209 cri.go:89] found id: ""
	I1217 00:07:37.008094   30209 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 00:07:37.021498   30209 out.go:203] 
	W1217 00:07:37.022680   30209 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T00:07:37Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T00:07:37Z" level=error msg="open /run/runc: no such file or directory"
	
	W1217 00:07:37.022696   30209 out.go:285] * 
	* 
	W1217 00:07:37.025603   30209 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 00:07:37.026772   30209 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable volumesnapshots addon: args "out/minikube-linux-amd64 -p addons-401977 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-401977 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-401977 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (227.130202ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1217 00:07:37.082271   30271 out.go:360] Setting OutFile to fd 1 ...
	I1217 00:07:37.082532   30271 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:07:37.082541   30271 out.go:374] Setting ErrFile to fd 2...
	I1217 00:07:37.082546   30271 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:07:37.082748   30271 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-12816/.minikube/bin
	I1217 00:07:37.083009   30271 mustload.go:66] Loading cluster: addons-401977
	I1217 00:07:37.083441   30271 config.go:182] Loaded profile config "addons-401977": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:07:37.083460   30271 addons.go:622] checking whether the cluster is paused
	I1217 00:07:37.083537   30271 config.go:182] Loaded profile config "addons-401977": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:07:37.083549   30271 host.go:66] Checking if "addons-401977" exists ...
	I1217 00:07:37.083890   30271 cli_runner.go:164] Run: docker container inspect addons-401977 --format={{.State.Status}}
	I1217 00:07:37.100985   30271 ssh_runner.go:195] Run: systemctl --version
	I1217 00:07:37.101066   30271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-401977
	I1217 00:07:37.117378   30271 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/addons-401977/id_rsa Username:docker}
	I1217 00:07:37.207659   30271 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 00:07:37.207753   30271 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 00:07:37.236101   30271 cri.go:89] found id: "7d625ea0827567c1ecf6072d9f705ee4a0e2896e259825605202bb193195e013"
	I1217 00:07:37.236122   30271 cri.go:89] found id: "ad54e02660b07cbde6d493c7d5e3ed172475b94a9eaee4e87d4bd9ef151c0b22"
	I1217 00:07:37.236127   30271 cri.go:89] found id: "45b944b097394c01a6a9f73b7481a21c99e516a6329609ae554a14bb17a1b0c4"
	I1217 00:07:37.236132   30271 cri.go:89] found id: "96a32ce198ee5fea679ca8aa4c1ec23792c97d3131fb348ba7d61057703f8b98"
	I1217 00:07:37.236136   30271 cri.go:89] found id: "6060f042efdda63b0e493f156e033554f81a8024186ecd9b75e89903f49cc5a6"
	I1217 00:07:37.236141   30271 cri.go:89] found id: "cfb270351b7717448c0caad78a981c83e200c3645e7ea23795af66b940e7f694"
	I1217 00:07:37.236146   30271 cri.go:89] found id: "60a76e334b179e66cc6937fdcf120c474c69221436fe9732a138ce177b409c81"
	I1217 00:07:37.236149   30271 cri.go:89] found id: "404a83db71038ead5c1b120d09189bcde64f22a00bc12c772da57ed50d0b4e31"
	I1217 00:07:37.236154   30271 cri.go:89] found id: "7477be1e8e83d8e93db214a41f2cbe2dac4702f15bafed422689a1ad41a282ee"
	I1217 00:07:37.236171   30271 cri.go:89] found id: "88b4569360ba98b30f65b36287d6c38a51676cebffa1785dc5414861aa1a0629"
	I1217 00:07:37.236179   30271 cri.go:89] found id: "7ad73ae76171d2d2105f4ccfe0862424948abc4be7d39a9f3d3999660c222211"
	I1217 00:07:37.236185   30271 cri.go:89] found id: "e77e53ca2e567bf130231e95ef7e993ca42c0bc61aab6ffced345e9e69c005cc"
	I1217 00:07:37.236193   30271 cri.go:89] found id: "d5d932c1082d35b313501328162c7f2f663374a7e3c58c4c2b1114359e9493df"
	I1217 00:07:37.236199   30271 cri.go:89] found id: "5d7bc94a6e7622d199da43ca8e9942b4e849ae76949d0554b9d548a510dd26ce"
	I1217 00:07:37.236219   30271 cri.go:89] found id: "b3c5366ec83c76413d2675b009b0c59e92e94121bf53dd83abb96b2fc0bd58b7"
	I1217 00:07:37.236238   30271 cri.go:89] found id: "dfd9e15edab91358da5ee7de7e20baaf2f8b820f9507af61b26dcbf0be9749ac"
	I1217 00:07:37.236246   30271 cri.go:89] found id: "a380c22257b5cfb547f66e134e244b2c1d6bd55bad431f846b76089ef28f6a89"
	I1217 00:07:37.236254   30271 cri.go:89] found id: "f6e58bb2900bb7013f6f81ccca2250cb1b6547be3edcc33d4bc867ae9d0b4072"
	I1217 00:07:37.236257   30271 cri.go:89] found id: "383049ced70e6adc7b2ef0d1a415cd527a2670898bfb2873c9b8955afffff3eb"
	I1217 00:07:37.236262   30271 cri.go:89] found id: "840301d1bb594051e430e85719c2707ed97013c7e3269f84012213ab768d9935"
	I1217 00:07:37.236267   30271 cri.go:89] found id: "950dc7c477829d5fc62b7e10ed2edf92016de18feb8bc6c8d8262fbf28097b78"
	I1217 00:07:37.236272   30271 cri.go:89] found id: "c85efeb6af746eaf16f9b1ef2458c5065693555ece6e3b595a07ccc7b8c2e6d9"
	I1217 00:07:37.236281   30271 cri.go:89] found id: "f55d4645a3da61635311b8471dafe61926de73f6c6575bcdc112a086cfde666a"
	I1217 00:07:37.236287   30271 cri.go:89] found id: "a9fb6926bb935bb35c23586c7a59d3ecc32fdac56e6508767e75f4b3b5db4340"
	I1217 00:07:37.236294   30271 cri.go:89] found id: "a5dab92a052f84df44c207e5dd5c238be41faadb22b92368fa8135c2af2fd265"
	I1217 00:07:37.236300   30271 cri.go:89] found id: ""
	I1217 00:07:37.236347   30271 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 00:07:37.249711   30271 out.go:203] 
	W1217 00:07:37.250783   30271 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T00:07:37Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T00:07:37Z" level=error msg="open /run/runc: no such file or directory"
	
	W1217 00:07:37.250800   30271 out.go:285] * 
	* 
	W1217 00:07:37.253675   30271 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 00:07:37.254670   30271 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-amd64 -p addons-401977 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (43.02s)
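The long run of identical "kubectl get pvc hpvc -o jsonpath={.status.phase}" lines earlier in this test is the helper polling the claim until its phase reports Bound (and later the same for hpvc-restore). A rough Go sketch of that polling loop follows; it assumes kubectl is on PATH and that the addons-401977 kubectl context still exists, and the helper name is made up for illustration rather than taken from the test suite.

// wait_pvc_bound.go: sketch of the polling pattern behind the repeated
// "kubectl get pvc ... -o jsonpath={.status.phase}" lines above.
// Assumptions: kubectl on PATH, context "addons-401977" exists.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitPVCBound polls the claim's phase until it is "Bound" or the timeout expires.
func waitPVCBound(context, namespace, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", context, "get", "pvc", name,
			"-n", namespace, "-o", "jsonpath={.status.phase}").Output()
		if err == nil && strings.TrimSpace(string(out)) == "Bound" {
			return nil
		}
		time.Sleep(2 * time.Second) // roughly the cadence visible in the log
	}
	return fmt.Errorf("pvc %s/%s not Bound within %s", namespace, name, timeout)
}

func main() {
	if err := waitPVCBound("addons-401977", "default", "hpvc", 6*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("pvc hpvc is Bound")
}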

TestAddons/parallel/Headlamp (2.43s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:810: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-401977 --alsologtostderr -v=1
addons_test.go:810: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable headlamp -p addons-401977 --alsologtostderr -v=1: exit status 11 (243.615583ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1217 00:06:54.301782   26283 out.go:360] Setting OutFile to fd 1 ...
	I1217 00:06:54.301929   26283 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:06:54.301936   26283 out.go:374] Setting ErrFile to fd 2...
	I1217 00:06:54.301942   26283 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:06:54.302239   26283 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-12816/.minikube/bin
	I1217 00:06:54.302594   26283 mustload.go:66] Loading cluster: addons-401977
	I1217 00:06:54.302926   26283 config.go:182] Loaded profile config "addons-401977": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:06:54.302948   26283 addons.go:622] checking whether the cluster is paused
	I1217 00:06:54.303070   26283 config.go:182] Loaded profile config "addons-401977": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:06:54.303087   26283 host.go:66] Checking if "addons-401977" exists ...
	I1217 00:06:54.303507   26283 cli_runner.go:164] Run: docker container inspect addons-401977 --format={{.State.Status}}
	I1217 00:06:54.323081   26283 ssh_runner.go:195] Run: systemctl --version
	I1217 00:06:54.323143   26283 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-401977
	I1217 00:06:54.341076   26283 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/addons-401977/id_rsa Username:docker}
	I1217 00:06:54.435437   26283 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 00:06:54.435523   26283 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 00:06:54.463180   26283 cri.go:89] found id: "ad54e02660b07cbde6d493c7d5e3ed172475b94a9eaee4e87d4bd9ef151c0b22"
	I1217 00:06:54.463200   26283 cri.go:89] found id: "45b944b097394c01a6a9f73b7481a21c99e516a6329609ae554a14bb17a1b0c4"
	I1217 00:06:54.463204   26283 cri.go:89] found id: "96a32ce198ee5fea679ca8aa4c1ec23792c97d3131fb348ba7d61057703f8b98"
	I1217 00:06:54.463207   26283 cri.go:89] found id: "6060f042efdda63b0e493f156e033554f81a8024186ecd9b75e89903f49cc5a6"
	I1217 00:06:54.463210   26283 cri.go:89] found id: "cfb270351b7717448c0caad78a981c83e200c3645e7ea23795af66b940e7f694"
	I1217 00:06:54.463213   26283 cri.go:89] found id: "60a76e334b179e66cc6937fdcf120c474c69221436fe9732a138ce177b409c81"
	I1217 00:06:54.463216   26283 cri.go:89] found id: "404a83db71038ead5c1b120d09189bcde64f22a00bc12c772da57ed50d0b4e31"
	I1217 00:06:54.463218   26283 cri.go:89] found id: "7477be1e8e83d8e93db214a41f2cbe2dac4702f15bafed422689a1ad41a282ee"
	I1217 00:06:54.463221   26283 cri.go:89] found id: "88b4569360ba98b30f65b36287d6c38a51676cebffa1785dc5414861aa1a0629"
	I1217 00:06:54.463226   26283 cri.go:89] found id: "7ad73ae76171d2d2105f4ccfe0862424948abc4be7d39a9f3d3999660c222211"
	I1217 00:06:54.463229   26283 cri.go:89] found id: "e77e53ca2e567bf130231e95ef7e993ca42c0bc61aab6ffced345e9e69c005cc"
	I1217 00:06:54.463231   26283 cri.go:89] found id: "d5d932c1082d35b313501328162c7f2f663374a7e3c58c4c2b1114359e9493df"
	I1217 00:06:54.463234   26283 cri.go:89] found id: "5d7bc94a6e7622d199da43ca8e9942b4e849ae76949d0554b9d548a510dd26ce"
	I1217 00:06:54.463247   26283 cri.go:89] found id: "b3c5366ec83c76413d2675b009b0c59e92e94121bf53dd83abb96b2fc0bd58b7"
	I1217 00:06:54.463252   26283 cri.go:89] found id: "dfd9e15edab91358da5ee7de7e20baaf2f8b820f9507af61b26dcbf0be9749ac"
	I1217 00:06:54.463264   26283 cri.go:89] found id: "a380c22257b5cfb547f66e134e244b2c1d6bd55bad431f846b76089ef28f6a89"
	I1217 00:06:54.463272   26283 cri.go:89] found id: "f6e58bb2900bb7013f6f81ccca2250cb1b6547be3edcc33d4bc867ae9d0b4072"
	I1217 00:06:54.463275   26283 cri.go:89] found id: "383049ced70e6adc7b2ef0d1a415cd527a2670898bfb2873c9b8955afffff3eb"
	I1217 00:06:54.463278   26283 cri.go:89] found id: "840301d1bb594051e430e85719c2707ed97013c7e3269f84012213ab768d9935"
	I1217 00:06:54.463280   26283 cri.go:89] found id: "950dc7c477829d5fc62b7e10ed2edf92016de18feb8bc6c8d8262fbf28097b78"
	I1217 00:06:54.463286   26283 cri.go:89] found id: "c85efeb6af746eaf16f9b1ef2458c5065693555ece6e3b595a07ccc7b8c2e6d9"
	I1217 00:06:54.463289   26283 cri.go:89] found id: "f55d4645a3da61635311b8471dafe61926de73f6c6575bcdc112a086cfde666a"
	I1217 00:06:54.463291   26283 cri.go:89] found id: "a9fb6926bb935bb35c23586c7a59d3ecc32fdac56e6508767e75f4b3b5db4340"
	I1217 00:06:54.463294   26283 cri.go:89] found id: "a5dab92a052f84df44c207e5dd5c238be41faadb22b92368fa8135c2af2fd265"
	I1217 00:06:54.463296   26283 cri.go:89] found id: ""
	I1217 00:06:54.463330   26283 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 00:06:54.477003   26283 out.go:203] 
	W1217 00:06:54.478315   26283 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T00:06:54Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T00:06:54Z" level=error msg="open /run/runc: no such file or directory"
	
	W1217 00:06:54.478343   26283 out.go:285] * 
	* 
	W1217 00:06:54.481226   26283 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 00:06:54.482569   26283 out.go:203] 

** /stderr **
addons_test.go:812: failed to enable headlamp addon: args: "out/minikube-linux-amd64 addons enable headlamp -p addons-401977 --alsologtostderr -v=1": exit status 11
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect addons-401977
helpers_test.go:244: (dbg) docker inspect addons-401977:

-- stdout --
	[
	    {
	        "Id": "219e112c500a63f9336b2666157863d4cfe597753815d1bf3cb7dc7b0552a566",
	        "Created": "2025-12-17T00:05:07.512571798Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 18755,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T00:05:07.543088192Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2e44aac5cae5bb6b68b129ed5c85e80a5c1aac07706537d46ba12326f0e5c3cf",
	        "ResolvConfPath": "/var/lib/docker/containers/219e112c500a63f9336b2666157863d4cfe597753815d1bf3cb7dc7b0552a566/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/219e112c500a63f9336b2666157863d4cfe597753815d1bf3cb7dc7b0552a566/hostname",
	        "HostsPath": "/var/lib/docker/containers/219e112c500a63f9336b2666157863d4cfe597753815d1bf3cb7dc7b0552a566/hosts",
	        "LogPath": "/var/lib/docker/containers/219e112c500a63f9336b2666157863d4cfe597753815d1bf3cb7dc7b0552a566/219e112c500a63f9336b2666157863d4cfe597753815d1bf3cb7dc7b0552a566-json.log",
	        "Name": "/addons-401977",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-401977:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-401977",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "219e112c500a63f9336b2666157863d4cfe597753815d1bf3cb7dc7b0552a566",
	                "LowerDir": "/var/lib/docker/overlay2/2b92b8898f7a98811215bd838566dddb1002cf7f5fcff05d32154ccc0b9fec51-init/diff:/var/lib/docker/overlay2/594b812fd6d8db89dab322ea9e00d43dd555e9709fb5e6953e3873cce717392c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2b92b8898f7a98811215bd838566dddb1002cf7f5fcff05d32154ccc0b9fec51/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2b92b8898f7a98811215bd838566dddb1002cf7f5fcff05d32154ccc0b9fec51/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2b92b8898f7a98811215bd838566dddb1002cf7f5fcff05d32154ccc0b9fec51/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-401977",
	                "Source": "/var/lib/docker/volumes/addons-401977/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-401977",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-401977",
	                "name.minikube.sigs.k8s.io": "addons-401977",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "9785f6653fa784d761c28e50fadc7c676d001811f421ddb0a68f5cdc441e1c28",
	            "SandboxKey": "/var/run/docker/netns/9785f6653fa7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "Networks": {
	                "addons-401977": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d27b277831966c02ef98dd516e15594caf20e2a10cfc9f62c3b9efd8d57b5104",
	                    "EndpointID": "8e84a2d47fdb52334c0de6e1447539e385c2616740d34e464939f0b8856e1892",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "6a:ff:fb:40:c5:84",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-401977",
	                        "219e112c500a"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
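The cli_runner lines in the stderr dumps above resolve the node's SSH endpoint by reading the published "22/tcp" host port out of docker inspect (127.0.0.1:32768 in this run, matching NetworkSettings.Ports in the JSON just shown). A small Go sketch that extracts the same value by decoding that inspect JSON follows; docker on PATH and the container name addons-401977 are assumed, and the struct only models the fields visible in the dump.

// ssh_port.go: read back the published "22/tcp" host port (32768 above) from
// "docker inspect" JSON like the dump shown. Assumptions: docker on PATH,
// container "addons-401977" exists; field names follow the JSON printed above.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type inspect struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string
			HostPort string
		}
	}
}

func main() {
	out, err := exec.Command("docker", "inspect", "addons-401977").Output()
	if err != nil {
		fmt.Println("docker inspect failed:", err)
		return
	}
	var containers []inspect
	if err := json.Unmarshal(out, &containers); err != nil || len(containers) == 0 {
		fmt.Println("could not decode inspect output:", err)
		return
	}
	bindings := containers[0].NetworkSettings.Ports["22/tcp"]
	if len(bindings) == 0 {
		fmt.Println("no published 22/tcp binding")
		return
	}
	// With the container above this prints 127.0.0.1:32768, the endpoint the
	// test's ssh client connects to.
	fmt.Printf("ssh endpoint: %s:%s\n", bindings[0].HostIp, bindings[0].HostPort)
}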
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-401977 -n addons-401977
helpers_test.go:253: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p addons-401977 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p addons-401977 logs -n 25: (1.078826406s)
helpers_test.go:261: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-717461 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-717461   │ jenkins │ v1.37.0 │ 17 Dec 25 00:04 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 17 Dec 25 00:04 UTC │ 17 Dec 25 00:04 UTC │
	│ delete  │ -p download-only-717461                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-717461   │ jenkins │ v1.37.0 │ 17 Dec 25 00:04 UTC │ 17 Dec 25 00:04 UTC │
	│ start   │ -o=json --download-only -p download-only-618348 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-618348   │ jenkins │ v1.37.0 │ 17 Dec 25 00:04 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 17 Dec 25 00:04 UTC │ 17 Dec 25 00:04 UTC │
	│ delete  │ -p download-only-618348                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-618348   │ jenkins │ v1.37.0 │ 17 Dec 25 00:04 UTC │ 17 Dec 25 00:04 UTC │
	│ start   │ -o=json --download-only -p download-only-928929 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                         │ download-only-928929   │ jenkins │ v1.37.0 │ 17 Dec 25 00:04 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 17 Dec 25 00:04 UTC │ 17 Dec 25 00:04 UTC │
	│ delete  │ -p download-only-928929                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-928929   │ jenkins │ v1.37.0 │ 17 Dec 25 00:04 UTC │ 17 Dec 25 00:04 UTC │
	│ delete  │ -p download-only-717461                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-717461   │ jenkins │ v1.37.0 │ 17 Dec 25 00:04 UTC │ 17 Dec 25 00:04 UTC │
	│ delete  │ -p download-only-618348                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-618348   │ jenkins │ v1.37.0 │ 17 Dec 25 00:04 UTC │ 17 Dec 25 00:04 UTC │
	│ delete  │ -p download-only-928929                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-928929   │ jenkins │ v1.37.0 │ 17 Dec 25 00:04 UTC │ 17 Dec 25 00:04 UTC │
	│ start   │ --download-only -p download-docker-275762 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-275762 │ jenkins │ v1.37.0 │ 17 Dec 25 00:04 UTC │                     │
	│ delete  │ -p download-docker-275762                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-275762 │ jenkins │ v1.37.0 │ 17 Dec 25 00:04 UTC │ 17 Dec 25 00:04 UTC │
	│ start   │ --download-only -p binary-mirror-602188 --alsologtostderr --binary-mirror http://127.0.0.1:39411 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-602188   │ jenkins │ v1.37.0 │ 17 Dec 25 00:04 UTC │                     │
	│ delete  │ -p binary-mirror-602188                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-602188   │ jenkins │ v1.37.0 │ 17 Dec 25 00:04 UTC │ 17 Dec 25 00:04 UTC │
	│ addons  │ enable dashboard -p addons-401977                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-401977          │ jenkins │ v1.37.0 │ 17 Dec 25 00:04 UTC │                     │
	│ addons  │ disable dashboard -p addons-401977                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-401977          │ jenkins │ v1.37.0 │ 17 Dec 25 00:04 UTC │                     │
	│ start   │ -p addons-401977 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-401977          │ jenkins │ v1.37.0 │ 17 Dec 25 00:04 UTC │ 17 Dec 25 00:06 UTC │
	│ addons  │ addons-401977 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-401977          │ jenkins │ v1.37.0 │ 17 Dec 25 00:06 UTC │                     │
	│ addons  │ addons-401977 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-401977          │ jenkins │ v1.37.0 │ 17 Dec 25 00:06 UTC │                     │
	│ addons  │ enable headlamp -p addons-401977 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-401977          │ jenkins │ v1.37.0 │ 17 Dec 25 00:06 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 00:04:44
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 00:04:44.708836   18113 out.go:360] Setting OutFile to fd 1 ...
	I1217 00:04:44.709100   18113 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:04:44.709110   18113 out.go:374] Setting ErrFile to fd 2...
	I1217 00:04:44.709114   18113 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:04:44.709301   18113 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-12816/.minikube/bin
	I1217 00:04:44.709757   18113 out.go:368] Setting JSON to false
	I1217 00:04:44.710564   18113 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2835,"bootTime":1765927050,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 00:04:44.710613   18113 start.go:143] virtualization: kvm guest
	I1217 00:04:44.712392   18113 out.go:179] * [addons-401977] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 00:04:44.713943   18113 notify.go:221] Checking for updates...
	I1217 00:04:44.713953   18113 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 00:04:44.715239   18113 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 00:04:44.716596   18113 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22168-12816/kubeconfig
	I1217 00:04:44.717964   18113 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-12816/.minikube
	I1217 00:04:44.719149   18113 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 00:04:44.720344   18113 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 00:04:44.721583   18113 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 00:04:44.743479   18113 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1217 00:04:44.743619   18113 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:04:44.794226   18113 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:49 SystemTime:2025-12-17 00:04:44.784960384 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 00:04:44.794358   18113 docker.go:319] overlay module found
	I1217 00:04:44.796075   18113 out.go:179] * Using the docker driver based on user configuration
	I1217 00:04:44.797251   18113 start.go:309] selected driver: docker
	I1217 00:04:44.797265   18113 start.go:927] validating driver "docker" against <nil>
	I1217 00:04:44.797275   18113 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 00:04:44.797826   18113 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:04:44.848741   18113 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:49 SystemTime:2025-12-17 00:04:44.840098046 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 00:04:44.848967   18113 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1217 00:04:44.849183   18113 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 00:04:44.850838   18113 out.go:179] * Using Docker driver with root privileges
	I1217 00:04:44.851955   18113 cni.go:84] Creating CNI manager for ""
	I1217 00:04:44.852029   18113 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 00:04:44.852040   18113 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1217 00:04:44.852098   18113 start.go:353] cluster config:
	{Name:addons-401977 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-401977 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s}
	I1217 00:04:44.853505   18113 out.go:179] * Starting "addons-401977" primary control-plane node in "addons-401977" cluster
	I1217 00:04:44.854637   18113 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 00:04:44.855973   18113 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 00:04:44.857141   18113 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1217 00:04:44.857169   18113 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22168-12816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1217 00:04:44.857178   18113 cache.go:65] Caching tarball of preloaded images
	I1217 00:04:44.857220   18113 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 00:04:44.857268   18113 preload.go:238] Found /home/jenkins/minikube-integration/22168-12816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1217 00:04:44.857279   18113 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1217 00:04:44.857597   18113 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/addons-401977/config.json ...
	I1217 00:04:44.857621   18113 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/addons-401977/config.json: {Name:mka1ac6724c3ce75414158b232e8956807c75e7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:04:44.872732   18113 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 to local cache
	I1217 00:04:44.872854   18113 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local cache directory
	I1217 00:04:44.872872   18113 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local cache directory, skipping pull
	I1217 00:04:44.872878   18113 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in cache, skipping pull
	I1217 00:04:44.872887   18113 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 as a tarball
	I1217 00:04:44.872897   18113 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 from local cache
	I1217 00:04:56.843697   18113 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 from cached tarball
	I1217 00:04:56.843733   18113 cache.go:243] Successfully downloaded all kic artifacts
	I1217 00:04:56.843770   18113 start.go:360] acquireMachinesLock for addons-401977: {Name:mk469783e29eb0a81971ed75239211715445c9d5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 00:04:56.843862   18113 start.go:364] duration metric: took 74.89µs to acquireMachinesLock for "addons-401977"
	I1217 00:04:56.843889   18113 start.go:93] Provisioning new machine with config: &{Name:addons-401977 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-401977 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 00:04:56.843956   18113 start.go:125] createHost starting for "" (driver="docker")
	I1217 00:04:56.845696   18113 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1217 00:04:56.845934   18113 start.go:159] libmachine.API.Create for "addons-401977" (driver="docker")
	I1217 00:04:56.845964   18113 client.go:173] LocalClient.Create starting
	I1217 00:04:56.846096   18113 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem
	I1217 00:04:56.947740   18113 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/cert.pem
	I1217 00:04:56.986529   18113 cli_runner.go:164] Run: docker network inspect addons-401977 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1217 00:04:57.004429   18113 cli_runner.go:211] docker network inspect addons-401977 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1217 00:04:57.004499   18113 network_create.go:284] running [docker network inspect addons-401977] to gather additional debugging logs...
	I1217 00:04:57.004517   18113 cli_runner.go:164] Run: docker network inspect addons-401977
	W1217 00:04:57.019094   18113 cli_runner.go:211] docker network inspect addons-401977 returned with exit code 1
	I1217 00:04:57.019120   18113 network_create.go:287] error running [docker network inspect addons-401977]: docker network inspect addons-401977: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-401977 not found
	I1217 00:04:57.019145   18113 network_create.go:289] output of [docker network inspect addons-401977]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-401977 not found
	
	** /stderr **
	I1217 00:04:57.019248   18113 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 00:04:57.034880   18113 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ef2180}
	I1217 00:04:57.034912   18113 network_create.go:124] attempt to create docker network addons-401977 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1217 00:04:57.034952   18113 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-401977 addons-401977
	I1217 00:04:57.078427   18113 network_create.go:108] docker network addons-401977 192.168.49.0/24 created
	I1217 00:04:57.078455   18113 kic.go:121] calculated static IP "192.168.49.2" for the "addons-401977" container
	I1217 00:04:57.078544   18113 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1217 00:04:57.093138   18113 cli_runner.go:164] Run: docker volume create addons-401977 --label name.minikube.sigs.k8s.io=addons-401977 --label created_by.minikube.sigs.k8s.io=true
	I1217 00:04:57.109025   18113 oci.go:103] Successfully created a docker volume addons-401977
	I1217 00:04:57.109119   18113 cli_runner.go:164] Run: docker run --rm --name addons-401977-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-401977 --entrypoint /usr/bin/test -v addons-401977:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib
	I1217 00:05:03.664941   18113 cli_runner.go:217] Completed: docker run --rm --name addons-401977-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-401977 --entrypoint /usr/bin/test -v addons-401977:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib: (6.555775228s)
	I1217 00:05:03.664975   18113 oci.go:107] Successfully prepared a docker volume addons-401977
	I1217 00:05:03.665060   18113 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1217 00:05:03.665073   18113 kic.go:194] Starting extracting preloaded images to volume ...
	I1217 00:05:03.665122   18113 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22168-12816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-401977:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir
	I1217 00:05:07.442476   18113 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22168-12816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-401977:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir: (3.777307344s)
	I1217 00:05:07.442509   18113 kic.go:203] duration metric: took 3.777432183s to extract preloaded images to volume ...
	W1217 00:05:07.442605   18113 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1217 00:05:07.442653   18113 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1217 00:05:07.442703   18113 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1217 00:05:07.497828   18113 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-401977 --name addons-401977 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-401977 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-401977 --network addons-401977 --ip 192.168.49.2 --volume addons-401977:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78
	I1217 00:05:07.776337   18113 cli_runner.go:164] Run: docker container inspect addons-401977 --format={{.State.Running}}
	I1217 00:05:07.794052   18113 cli_runner.go:164] Run: docker container inspect addons-401977 --format={{.State.Status}}
	I1217 00:05:07.811511   18113 cli_runner.go:164] Run: docker exec addons-401977 stat /var/lib/dpkg/alternatives/iptables
	I1217 00:05:07.855511   18113 oci.go:144] the created container "addons-401977" has a running status.
	I1217 00:05:07.855543   18113 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22168-12816/.minikube/machines/addons-401977/id_rsa...
	I1217 00:05:07.980091   18113 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22168-12816/.minikube/machines/addons-401977/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1217 00:05:08.007049   18113 cli_runner.go:164] Run: docker container inspect addons-401977 --format={{.State.Status}}
	I1217 00:05:08.027612   18113 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1217 00:05:08.027640   18113 kic_runner.go:114] Args: [docker exec --privileged addons-401977 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1217 00:05:08.075193   18113 cli_runner.go:164] Run: docker container inspect addons-401977 --format={{.State.Status}}
	I1217 00:05:08.098548   18113 machine.go:94] provisionDockerMachine start ...
	I1217 00:05:08.098656   18113 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-401977
	I1217 00:05:08.117838   18113 main.go:143] libmachine: Using SSH client type: native
	I1217 00:05:08.118184   18113 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1217 00:05:08.118205   18113 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 00:05:08.248592   18113 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-401977
	
	I1217 00:05:08.248623   18113 ubuntu.go:182] provisioning hostname "addons-401977"
	I1217 00:05:08.248680   18113 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-401977
	I1217 00:05:08.265867   18113 main.go:143] libmachine: Using SSH client type: native
	I1217 00:05:08.266160   18113 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1217 00:05:08.266179   18113 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-401977 && echo "addons-401977" | sudo tee /etc/hostname
	I1217 00:05:08.400856   18113 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-401977
	
	I1217 00:05:08.400920   18113 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-401977
	I1217 00:05:08.420240   18113 main.go:143] libmachine: Using SSH client type: native
	I1217 00:05:08.420542   18113 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1217 00:05:08.420571   18113 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-401977' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-401977/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-401977' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 00:05:08.544155   18113 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 00:05:08.544178   18113 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22168-12816/.minikube CaCertPath:/home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22168-12816/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22168-12816/.minikube}
	I1217 00:05:08.544199   18113 ubuntu.go:190] setting up certificates
	I1217 00:05:08.544211   18113 provision.go:84] configureAuth start
	I1217 00:05:08.544264   18113 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-401977
	I1217 00:05:08.560543   18113 provision.go:143] copyHostCerts
	I1217 00:05:08.560617   18113 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22168-12816/.minikube/ca.pem (1078 bytes)
	I1217 00:05:08.560753   18113 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22168-12816/.minikube/cert.pem (1123 bytes)
	I1217 00:05:08.560843   18113 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22168-12816/.minikube/key.pem (1679 bytes)
	I1217 00:05:08.560915   18113 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22168-12816/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca-key.pem org=jenkins.addons-401977 san=[127.0.0.1 192.168.49.2 addons-401977 localhost minikube]
	I1217 00:05:08.715974   18113 provision.go:177] copyRemoteCerts
	I1217 00:05:08.716030   18113 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 00:05:08.716083   18113 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-401977
	I1217 00:05:08.734195   18113 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/addons-401977/id_rsa Username:docker}
	I1217 00:05:08.827185   18113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1217 00:05:08.844679   18113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1217 00:05:08.861266   18113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 00:05:08.877459   18113 provision.go:87] duration metric: took 333.229549ms to configureAuth
	I1217 00:05:08.877481   18113 ubuntu.go:206] setting minikube options for container-runtime
	I1217 00:05:08.877634   18113 config.go:182] Loaded profile config "addons-401977": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:05:08.877718   18113 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-401977
	I1217 00:05:08.895491   18113 main.go:143] libmachine: Using SSH client type: native
	I1217 00:05:08.895708   18113 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1217 00:05:08.895725   18113 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 00:05:09.151437   18113 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 00:05:09.151462   18113 machine.go:97] duration metric: took 1.052893209s to provisionDockerMachine
	I1217 00:05:09.151475   18113 client.go:176] duration metric: took 12.305501842s to LocalClient.Create
	I1217 00:05:09.151494   18113 start.go:167] duration metric: took 12.305560138s to libmachine.API.Create "addons-401977"
	I1217 00:05:09.151504   18113 start.go:293] postStartSetup for "addons-401977" (driver="docker")
	I1217 00:05:09.151516   18113 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 00:05:09.151573   18113 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 00:05:09.151603   18113 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-401977
	I1217 00:05:09.168556   18113 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/addons-401977/id_rsa Username:docker}
	I1217 00:05:09.260136   18113 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 00:05:09.263403   18113 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 00:05:09.263423   18113 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 00:05:09.263432   18113 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-12816/.minikube/addons for local assets ...
	I1217 00:05:09.263492   18113 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-12816/.minikube/files for local assets ...
	I1217 00:05:09.263518   18113 start.go:296] duration metric: took 112.008672ms for postStartSetup
	I1217 00:05:09.263794   18113 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-401977
	I1217 00:05:09.281751   18113 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/addons-401977/config.json ...
	I1217 00:05:09.282033   18113 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 00:05:09.282088   18113 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-401977
	I1217 00:05:09.299336   18113 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/addons-401977/id_rsa Username:docker}
	I1217 00:05:09.386590   18113 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 00:05:09.390890   18113 start.go:128] duration metric: took 12.546921798s to createHost
	I1217 00:05:09.390907   18113 start.go:83] releasing machines lock for "addons-401977", held for 12.54703548s
	I1217 00:05:09.390975   18113 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-401977
	I1217 00:05:09.407214   18113 ssh_runner.go:195] Run: cat /version.json
	I1217 00:05:09.407242   18113 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 00:05:09.407252   18113 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-401977
	I1217 00:05:09.407302   18113 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-401977
	I1217 00:05:09.426384   18113 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/addons-401977/id_rsa Username:docker}
	I1217 00:05:09.426734   18113 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/addons-401977/id_rsa Username:docker}
	I1217 00:05:09.566091   18113 ssh_runner.go:195] Run: systemctl --version
	I1217 00:05:09.572144   18113 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 00:05:09.603804   18113 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 00:05:09.608124   18113 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 00:05:09.608195   18113 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 00:05:09.631576   18113 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1217 00:05:09.631597   18113 start.go:496] detecting cgroup driver to use...
	I1217 00:05:09.631630   18113 detect.go:190] detected "systemd" cgroup driver on host os
	I1217 00:05:09.631666   18113 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 00:05:09.646686   18113 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 00:05:09.658504   18113 docker.go:218] disabling cri-docker service (if available) ...
	I1217 00:05:09.658560   18113 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 00:05:09.675140   18113 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 00:05:09.691628   18113 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 00:05:09.763672   18113 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 00:05:09.844007   18113 docker.go:234] disabling docker service ...
	I1217 00:05:09.844074   18113 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 00:05:09.860519   18113 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 00:05:09.871833   18113 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 00:05:09.948862   18113 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 00:05:10.025768   18113 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 00:05:10.037108   18113 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 00:05:10.050010   18113 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 00:05:10.050067   18113 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:05:10.059500   18113 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1217 00:05:10.059548   18113 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:05:10.067651   18113 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:05:10.075474   18113 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:05:10.083310   18113 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 00:05:10.090491   18113 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:05:10.098154   18113 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:05:10.110589   18113 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:05:10.118304   18113 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 00:05:10.125079   18113 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1217 00:05:10.125127   18113 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1217 00:05:10.136134   18113 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 00:05:10.142818   18113 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:05:10.215099   18113 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1217 00:05:10.346471   18113 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 00:05:10.346544   18113 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 00:05:10.350230   18113 start.go:564] Will wait 60s for crictl version
	I1217 00:05:10.350287   18113 ssh_runner.go:195] Run: which crictl
	I1217 00:05:10.353519   18113 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 00:05:10.376156   18113 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1217 00:05:10.376267   18113 ssh_runner.go:195] Run: crio --version
	I1217 00:05:10.401763   18113 ssh_runner.go:195] Run: crio --version
	I1217 00:05:10.428658   18113 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1217 00:05:10.429739   18113 cli_runner.go:164] Run: docker network inspect addons-401977 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 00:05:10.445785   18113 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1217 00:05:10.449472   18113 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 00:05:10.458900   18113 kubeadm.go:884] updating cluster {Name:addons-401977 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-401977 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketV
MnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 00:05:10.459042   18113 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1217 00:05:10.459103   18113 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 00:05:10.488896   18113 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 00:05:10.488916   18113 crio.go:433] Images already preloaded, skipping extraction
	I1217 00:05:10.488961   18113 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 00:05:10.512578   18113 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 00:05:10.512596   18113 cache_images.go:86] Images are preloaded, skipping loading
	I1217 00:05:10.512603   18113 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.2 crio true true} ...
	I1217 00:05:10.512677   18113 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-401977 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:addons-401977 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 00:05:10.512737   18113 ssh_runner.go:195] Run: crio config
	I1217 00:05:10.555525   18113 cni.go:84] Creating CNI manager for ""
	I1217 00:05:10.555545   18113 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 00:05:10.555561   18113 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 00:05:10.555580   18113 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-401977 NodeName:addons-401977 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernet
es/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 00:05:10.555682   18113 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-401977"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 00:05:10.555739   18113 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1217 00:05:10.563646   18113 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 00:05:10.563710   18113 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 00:05:10.571116   18113 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1217 00:05:10.582523   18113 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1217 00:05:10.596418   18113 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
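Before kubeadm consumes the file staged above, it can be sanity-checked offline; a minimal sketch, assuming the kubeadm shipped alongside these v1.34.2 binaries supports the "config validate" subcommand:

	# validate the staged kubeadm config without starting anything
	sudo /var/lib/minikube/binaries/v1.34.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
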
	I1217 00:05:10.607861   18113 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1217 00:05:10.611032   18113 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 00:05:10.619690   18113 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:05:10.699166   18113 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 00:05:10.723144   18113 certs.go:69] Setting up /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/addons-401977 for IP: 192.168.49.2
	I1217 00:05:10.723254   18113 certs.go:195] generating shared ca certs ...
	I1217 00:05:10.723285   18113 certs.go:227] acquiring lock for ca certs: {Name:mk3fafd0dd66863a6056cb02497503a5e6afecf4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:05:10.723419   18113 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22168-12816/.minikube/ca.key
	I1217 00:05:10.889308   18113 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22168-12816/.minikube/ca.crt ...
	I1217 00:05:10.889337   18113 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/.minikube/ca.crt: {Name:mkad87bcfe71f8fef4f7432aa85f6a4d2072ed3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:05:10.889497   18113 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22168-12816/.minikube/ca.key ...
	I1217 00:05:10.889510   18113 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/.minikube/ca.key: {Name:mk52d01385ec5a003e642beb7bc53ba5d5e7dff7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:05:10.889612   18113 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22168-12816/.minikube/proxy-client-ca.key
	I1217 00:05:11.108126   18113 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22168-12816/.minikube/proxy-client-ca.crt ...
	I1217 00:05:11.108152   18113 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/.minikube/proxy-client-ca.crt: {Name:mkfbfa16ae86e4ac20e66123d6e5c2357f8d504f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:05:11.108302   18113 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22168-12816/.minikube/proxy-client-ca.key ...
	I1217 00:05:11.108313   18113 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/.minikube/proxy-client-ca.key: {Name:mk285953dc4c55f55c7256d71c31a2f9f336c4e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:05:11.108379   18113 certs.go:257] generating profile certs ...
	I1217 00:05:11.108431   18113 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/addons-401977/client.key
	I1217 00:05:11.108445   18113 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/addons-401977/client.crt with IP's: []
	I1217 00:05:11.277198   18113 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/addons-401977/client.crt ...
	I1217 00:05:11.277224   18113 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/addons-401977/client.crt: {Name:mkd11c3a01e684631f2f40bb6ba4f4d6517cdc7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:05:11.277375   18113 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/addons-401977/client.key ...
	I1217 00:05:11.277386   18113 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/addons-401977/client.key: {Name:mk057f36ae4609090c04147c0dc0e7f184016f49 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:05:11.277455   18113 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/addons-401977/apiserver.key.02537cb5
	I1217 00:05:11.277473   18113 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/addons-401977/apiserver.crt.02537cb5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1217 00:05:11.518968   18113 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/addons-401977/apiserver.crt.02537cb5 ...
	I1217 00:05:11.519005   18113 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/addons-401977/apiserver.crt.02537cb5: {Name:mk5a6e35f1890295558c040c004f2f7d78d1bed4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:05:11.519155   18113 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/addons-401977/apiserver.key.02537cb5 ...
	I1217 00:05:11.519169   18113 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/addons-401977/apiserver.key.02537cb5: {Name:mk234d90ae11d8a3b4b3e4083e99530de84ea660 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:05:11.519241   18113 certs.go:382] copying /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/addons-401977/apiserver.crt.02537cb5 -> /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/addons-401977/apiserver.crt
	I1217 00:05:11.519320   18113 certs.go:386] copying /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/addons-401977/apiserver.key.02537cb5 -> /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/addons-401977/apiserver.key
	I1217 00:05:11.519397   18113 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/addons-401977/proxy-client.key
	I1217 00:05:11.519425   18113 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/addons-401977/proxy-client.crt with IP's: []
	I1217 00:05:11.630780   18113 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/addons-401977/proxy-client.crt ...
	I1217 00:05:11.630811   18113 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/addons-401977/proxy-client.crt: {Name:mk2aa7b0d83f33ae26f405c809be02f906021b76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:05:11.630969   18113 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/addons-401977/proxy-client.key ...
	I1217 00:05:11.631001   18113 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/addons-401977/proxy-client.key: {Name:mkbc54d3bf0f6d79316494e4c7184a7ab041fbb1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:05:11.631171   18113 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca-key.pem (1679 bytes)
	I1217 00:05:11.631209   18113 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem (1078 bytes)
	I1217 00:05:11.631235   18113 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/cert.pem (1123 bytes)
	I1217 00:05:11.631273   18113 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/key.pem (1679 bytes)
	I1217 00:05:11.631894   18113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 00:05:11.649357   18113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1217 00:05:11.666174   18113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 00:05:11.683394   18113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1671 bytes)
	I1217 00:05:11.699621   18113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/addons-401977/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1217 00:05:11.715356   18113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/addons-401977/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 00:05:11.732370   18113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/addons-401977/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 00:05:11.748700   18113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/addons-401977/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1217 00:05:11.764999   18113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 00:05:11.782575   18113 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 00:05:11.794563   18113 ssh_runner.go:195] Run: openssl version
	I1217 00:05:11.800464   18113 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:05:11.807039   18113 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 00:05:11.816084   18113 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:05:11.819831   18113 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 00:05 /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:05:11.819890   18113 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:05:11.853012   18113 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 00:05:11.860609   18113 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
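The b5213941.0 name used above is the OpenSSL subject hash of the CA; the same link can be derived by hand, which is useful when checking why a CA is (or is not) picked up from the hashed-certs directory:

	# print the subject hash used as the link name (e.g. b5213941)
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# recreate the hashed symlink from that value
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem).0"
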
	I1217 00:05:11.867286   18113 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 00:05:11.870403   18113 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1217 00:05:11.870442   18113 kubeadm.go:401] StartCluster: {Name:addons-401977 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-401977 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:05:11.870498   18113 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 00:05:11.870530   18113 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 00:05:11.895249   18113 cri.go:89] found id: ""
	I1217 00:05:11.895305   18113 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 00:05:11.903188   18113 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 00:05:11.910421   18113 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 00:05:11.910463   18113 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 00:05:11.917442   18113 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 00:05:11.917456   18113 kubeadm.go:158] found existing configuration files:
	
	I1217 00:05:11.917485   18113 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1217 00:05:11.924460   18113 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 00:05:11.924511   18113 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 00:05:11.931174   18113 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1217 00:05:11.938190   18113 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 00:05:11.938225   18113 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 00:05:11.944705   18113 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1217 00:05:11.951151   18113 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 00:05:11.951195   18113 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 00:05:11.957894   18113 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1217 00:05:11.964633   18113 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 00:05:11.964697   18113 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 00:05:11.971053   18113 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 00:05:12.023229   18113 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1217 00:05:12.076724   18113 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 00:05:20.712347   18113 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1217 00:05:20.712420   18113 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 00:05:20.712502   18113 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 00:05:20.712600   18113 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1217 00:05:20.712654   18113 kubeadm.go:319] OS: Linux
	I1217 00:05:20.712696   18113 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 00:05:20.712757   18113 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 00:05:20.712824   18113 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 00:05:20.712870   18113 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 00:05:20.712931   18113 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 00:05:20.713022   18113 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 00:05:20.713077   18113 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 00:05:20.713117   18113 kubeadm.go:319] CGROUPS_IO: enabled
	I1217 00:05:20.713179   18113 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 00:05:20.713265   18113 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 00:05:20.713353   18113 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 00:05:20.713407   18113 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 00:05:20.714893   18113 out.go:252]   - Generating certificates and keys ...
	I1217 00:05:20.714964   18113 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 00:05:20.715055   18113 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 00:05:20.715115   18113 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1217 00:05:20.715184   18113 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1217 00:05:20.715240   18113 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1217 00:05:20.715287   18113 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1217 00:05:20.715346   18113 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1217 00:05:20.715453   18113 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-401977 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1217 00:05:20.715499   18113 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1217 00:05:20.715615   18113 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-401977 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1217 00:05:20.715672   18113 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1217 00:05:20.715726   18113 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1217 00:05:20.715769   18113 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1217 00:05:20.715839   18113 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 00:05:20.715885   18113 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 00:05:20.715971   18113 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 00:05:20.716066   18113 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 00:05:20.716217   18113 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 00:05:20.716290   18113 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 00:05:20.716418   18113 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 00:05:20.716487   18113 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 00:05:20.717697   18113 out.go:252]   - Booting up control plane ...
	I1217 00:05:20.717793   18113 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 00:05:20.717864   18113 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 00:05:20.717922   18113 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 00:05:20.718059   18113 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 00:05:20.718182   18113 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 00:05:20.718290   18113 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 00:05:20.718395   18113 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 00:05:20.718468   18113 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 00:05:20.718662   18113 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 00:05:20.718808   18113 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 00:05:20.718866   18113 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.709975ms
	I1217 00:05:20.718985   18113 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1217 00:05:20.719104   18113 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1217 00:05:20.719218   18113 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1217 00:05:20.719296   18113 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1217 00:05:20.719412   18113 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.506189451s
	I1217 00:05:20.719504   18113 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.441238256s
	I1217 00:05:20.719598   18113 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.501420628s
	I1217 00:05:20.719731   18113 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1217 00:05:20.719849   18113 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1217 00:05:20.719900   18113 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1217 00:05:20.720116   18113 kubeadm.go:319] [mark-control-plane] Marking the node addons-401977 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1217 00:05:20.720194   18113 kubeadm.go:319] [bootstrap-token] Using token: vnha8o.m9mo8pfeym1waa3p
	I1217 00:05:20.721480   18113 out.go:252]   - Configuring RBAC rules ...
	I1217 00:05:20.721585   18113 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1217 00:05:20.721678   18113 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1217 00:05:20.721851   18113 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1217 00:05:20.722033   18113 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1217 00:05:20.722203   18113 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1217 00:05:20.722330   18113 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1217 00:05:20.722433   18113 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1217 00:05:20.722486   18113 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1217 00:05:20.722547   18113 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1217 00:05:20.722560   18113 kubeadm.go:319] 
	I1217 00:05:20.722644   18113 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1217 00:05:20.722659   18113 kubeadm.go:319] 
	I1217 00:05:20.722773   18113 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1217 00:05:20.722784   18113 kubeadm.go:319] 
	I1217 00:05:20.722819   18113 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1217 00:05:20.722921   18113 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1217 00:05:20.723014   18113 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1217 00:05:20.723024   18113 kubeadm.go:319] 
	I1217 00:05:20.723101   18113 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1217 00:05:20.723110   18113 kubeadm.go:319] 
	I1217 00:05:20.723176   18113 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1217 00:05:20.723185   18113 kubeadm.go:319] 
	I1217 00:05:20.723262   18113 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1217 00:05:20.723382   18113 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1217 00:05:20.723446   18113 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1217 00:05:20.723452   18113 kubeadm.go:319] 
	I1217 00:05:20.723520   18113 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1217 00:05:20.723586   18113 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1217 00:05:20.723591   18113 kubeadm.go:319] 
	I1217 00:05:20.723672   18113 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token vnha8o.m9mo8pfeym1waa3p \
	I1217 00:05:20.723763   18113 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:a7c34974519aee4953e03245da076d7a2eba06e40135880a85806e2dab303fa1 \
	I1217 00:05:20.723782   18113 kubeadm.go:319] 	--control-plane 
	I1217 00:05:20.723787   18113 kubeadm.go:319] 
	I1217 00:05:20.723900   18113 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1217 00:05:20.723910   18113 kubeadm.go:319] 
	I1217 00:05:20.724068   18113 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token vnha8o.m9mo8pfeym1waa3p \
	I1217 00:05:20.724207   18113 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:a7c34974519aee4953e03245da076d7a2eba06e40135880a85806e2dab303fa1 
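For reference, the --discovery-token-ca-cert-hash printed in the join commands above is the SHA-256 of the cluster CA's public key; it can be recomputed from the CA certificate that was staged earlier in this log under /var/lib/minikube/certs:

	# recompute the discovery token CA cert hash from the cluster CA
	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'
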
	I1217 00:05:20.724222   18113 cni.go:84] Creating CNI manager for ""
	I1217 00:05:20.724230   18113 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 00:05:20.725530   18113 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1217 00:05:20.726608   18113 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1217 00:05:20.730767   18113 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1217 00:05:20.730782   18113 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1217 00:05:20.743299   18113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1217 00:05:20.936870   18113 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1217 00:05:20.936968   18113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:05:20.936982   18113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-401977 minikube.k8s.io/updated_at=2025_12_17T00_05_20_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=c7bb9b74fe8fa422b352c813eb039f077f405cb1 minikube.k8s.io/name=addons-401977 minikube.k8s.io/primary=true
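The binding and labels applied above can be verified from inside the node with the same pinned kubectl and kubeconfig; a minimal sketch:

	# confirm the minikube labels landed on the node
	sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get node addons-401977 --show-labels
	# confirm the cluster-admin binding for kube-system:default exists
	sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get clusterrolebinding minikube-rbac -o wide
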
	I1217 00:05:21.010781   18113 ops.go:34] apiserver oom_adj: -16
	I1217 00:05:21.010829   18113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:05:21.511034   18113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:05:22.011485   18113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:05:22.510981   18113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:05:23.011226   18113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:05:23.511238   18113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:05:24.011136   18113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:05:24.510903   18113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:05:25.010982   18113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:05:25.511879   18113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:05:25.573514   18113 kubeadm.go:1114] duration metric: took 4.63660723s to wait for elevateKubeSystemPrivileges
	I1217 00:05:25.573554   18113 kubeadm.go:403] duration metric: took 13.703113211s to StartCluster
	I1217 00:05:25.573590   18113 settings.go:142] acquiring lock: {Name:mk7d7632cd00ceda791845d793d841181ea8188a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:05:25.573709   18113 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22168-12816/kubeconfig
	I1217 00:05:25.574223   18113 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/kubeconfig: {Name:mkd977475b39383babd1c1bcd25b2b3c1ea2d727 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:05:25.574416   18113 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1217 00:05:25.574442   18113 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 00:05:25.574505   18113 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1217 00:05:25.574633   18113 addons.go:70] Setting inspektor-gadget=true in profile "addons-401977"
	I1217 00:05:25.574645   18113 addons.go:70] Setting metrics-server=true in profile "addons-401977"
	I1217 00:05:25.574660   18113 addons.go:239] Setting addon inspektor-gadget=true in "addons-401977"
	I1217 00:05:25.574668   18113 addons.go:239] Setting addon metrics-server=true in "addons-401977"
	I1217 00:05:25.574703   18113 host.go:66] Checking if "addons-401977" exists ...
	I1217 00:05:25.574712   18113 addons.go:70] Setting gcp-auth=true in profile "addons-401977"
	I1217 00:05:25.574705   18113 addons.go:70] Setting default-storageclass=true in profile "addons-401977"
	I1217 00:05:25.574733   18113 mustload.go:66] Loading cluster: addons-401977
	I1217 00:05:25.574748   18113 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-401977"
	I1217 00:05:25.574809   18113 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-401977"
	I1217 00:05:25.574848   18113 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-401977"
	I1217 00:05:25.574828   18113 addons.go:70] Setting cloud-spanner=true in profile "addons-401977"
	I1217 00:05:25.574884   18113 addons.go:239] Setting addon cloud-spanner=true in "addons-401977"
	I1217 00:05:25.574885   18113 host.go:66] Checking if "addons-401977" exists ...
	I1217 00:05:25.574918   18113 config.go:182] Loaded profile config "addons-401977": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:05:25.574900   18113 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-401977"
	I1217 00:05:25.574950   18113 addons.go:70] Setting storage-provisioner=true in profile "addons-401977"
	I1217 00:05:25.574971   18113 host.go:66] Checking if "addons-401977" exists ...
	I1217 00:05:25.574981   18113 addons.go:239] Setting addon storage-provisioner=true in "addons-401977"
	I1217 00:05:25.575023   18113 host.go:66] Checking if "addons-401977" exists ...
	I1217 00:05:25.575032   18113 addons.go:70] Setting registry=true in profile "addons-401977"
	I1217 00:05:25.575778   18113 addons.go:70] Setting ingress-dns=true in profile "addons-401977"
	I1217 00:05:25.575805   18113 addons.go:239] Setting addon ingress-dns=true in "addons-401977"
	I1217 00:05:25.575842   18113 host.go:66] Checking if "addons-401977" exists ...
	I1217 00:05:25.575708   18113 addons.go:70] Setting ingress=true in profile "addons-401977"
	I1217 00:05:25.576023   18113 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-401977"
	I1217 00:05:25.576045   18113 addons.go:239] Setting addon ingress=true in "addons-401977"
	I1217 00:05:25.576073   18113 host.go:66] Checking if "addons-401977" exists ...
	I1217 00:05:25.576089   18113 addons.go:70] Setting registry-creds=true in profile "addons-401977"
	I1217 00:05:25.576106   18113 addons.go:239] Setting addon registry-creds=true in "addons-401977"
	I1217 00:05:25.576121   18113 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-401977"
	I1217 00:05:25.576130   18113 host.go:66] Checking if "addons-401977" exists ...
	I1217 00:05:25.576136   18113 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-401977"
	I1217 00:05:25.576261   18113 addons.go:70] Setting volcano=true in profile "addons-401977"
	I1217 00:05:25.576274   18113 addons.go:239] Setting addon volcano=true in "addons-401977"
	I1217 00:05:25.576295   18113 host.go:66] Checking if "addons-401977" exists ...
	I1217 00:05:25.576314   18113 cli_runner.go:164] Run: docker container inspect addons-401977 --format={{.State.Status}}
	I1217 00:05:25.576450   18113 cli_runner.go:164] Run: docker container inspect addons-401977 --format={{.State.Status}}
	I1217 00:05:25.576588   18113 cli_runner.go:164] Run: docker container inspect addons-401977 --format={{.State.Status}}
	I1217 00:05:25.575787   18113 addons.go:239] Setting addon registry=true in "addons-401977"
	I1217 00:05:25.576727   18113 host.go:66] Checking if "addons-401977" exists ...
	I1217 00:05:25.576949   18113 cli_runner.go:164] Run: docker container inspect addons-401977 --format={{.State.Status}}
	I1217 00:05:25.577208   18113 cli_runner.go:164] Run: docker container inspect addons-401977 --format={{.State.Status}}
	I1217 00:05:25.577507   18113 out.go:179] * Verifying Kubernetes components...
	I1217 00:05:25.574705   18113 host.go:66] Checking if "addons-401977" exists ...
	I1217 00:05:25.577530   18113 cli_runner.go:164] Run: docker container inspect addons-401977 --format={{.State.Status}}
	I1217 00:05:25.578044   18113 cli_runner.go:164] Run: docker container inspect addons-401977 --format={{.State.Status}}
	I1217 00:05:25.578294   18113 cli_runner.go:164] Run: docker container inspect addons-401977 --format={{.State.Status}}
	I1217 00:05:25.576045   18113 addons.go:70] Setting volumesnapshots=true in profile "addons-401977"
	I1217 00:05:25.578489   18113 addons.go:239] Setting addon volumesnapshots=true in "addons-401977"
	I1217 00:05:25.578517   18113 host.go:66] Checking if "addons-401977" exists ...
	I1217 00:05:25.578784   18113 cli_runner.go:164] Run: docker container inspect addons-401977 --format={{.State.Status}}
	I1217 00:05:25.578794   18113 cli_runner.go:164] Run: docker container inspect addons-401977 --format={{.State.Status}}
	I1217 00:05:25.574705   18113 config.go:182] Loaded profile config "addons-401977": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:05:25.577728   18113 cli_runner.go:164] Run: docker container inspect addons-401977 --format={{.State.Status}}
	I1217 00:05:25.580583   18113 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:05:25.581378   18113 cli_runner.go:164] Run: docker container inspect addons-401977 --format={{.State.Status}}
	I1217 00:05:25.575762   18113 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-401977"
	I1217 00:05:25.582363   18113 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-401977"
	I1217 00:05:25.582394   18113 host.go:66] Checking if "addons-401977" exists ...
	I1217 00:05:25.574633   18113 addons.go:70] Setting yakd=true in profile "addons-401977"
	I1217 00:05:25.582436   18113 addons.go:239] Setting addon yakd=true in "addons-401977"
	I1217 00:05:25.582455   18113 host.go:66] Checking if "addons-401977" exists ...
	I1217 00:05:25.583340   18113 cli_runner.go:164] Run: docker container inspect addons-401977 --format={{.State.Status}}
	I1217 00:05:25.583352   18113 cli_runner.go:164] Run: docker container inspect addons-401977 --format={{.State.Status}}
	I1217 00:05:25.587371   18113 cli_runner.go:164] Run: docker container inspect addons-401977 --format={{.State.Status}}
	I1217 00:05:25.577746   18113 host.go:66] Checking if "addons-401977" exists ...
	I1217 00:05:25.583344   18113 cli_runner.go:164] Run: docker container inspect addons-401977 --format={{.State.Status}}
	I1217 00:05:25.589284   18113 cli_runner.go:164] Run: docker container inspect addons-401977 --format={{.State.Status}}
	I1217 00:05:25.623930   18113 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-401977"
	I1217 00:05:25.623984   18113 host.go:66] Checking if "addons-401977" exists ...
	I1217 00:05:25.624520   18113 cli_runner.go:164] Run: docker container inspect addons-401977 --format={{.State.Status}}
	I1217 00:05:25.628511   18113 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1217 00:05:25.629863   18113 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1217 00:05:25.629886   18113 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1217 00:05:25.629951   18113 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-401977
	I1217 00:05:25.635829   18113 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.47.0
	I1217 00:05:25.637951   18113 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1217 00:05:25.638875   18113 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1217 00:05:25.638982   18113 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-401977
	W1217 00:05:25.653461   18113 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1217 00:05:25.658946   18113 host.go:66] Checking if "addons-401977" exists ...
	I1217 00:05:25.658968   18113 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1217 00:05:25.661648   18113 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1217 00:05:25.661667   18113 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1217 00:05:25.661735   18113 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-401977
	I1217 00:05:25.661916   18113 addons.go:239] Setting addon default-storageclass=true in "addons-401977"
	I1217 00:05:25.661955   18113 host.go:66] Checking if "addons-401977" exists ...
	I1217 00:05:25.662395   18113 cli_runner.go:164] Run: docker container inspect addons-401977 --format={{.State.Status}}
	I1217 00:05:25.677044   18113 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1217 00:05:25.679284   18113 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1217 00:05:25.679756   18113 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1217 00:05:25.679836   18113 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-401977
	I1217 00:05:25.685075   18113 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1217 00:05:25.689845   18113 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 00:05:25.691032   18113 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1217 00:05:25.691063   18113 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1217 00:05:25.691123   18113 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-401977
	I1217 00:05:25.691414   18113 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:05:25.691432   18113 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 00:05:25.691486   18113 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-401977
	I1217 00:05:25.693847   18113 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/addons-401977/id_rsa Username:docker}
	I1217 00:05:25.697650   18113 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/addons-401977/id_rsa Username:docker}
	I1217 00:05:25.700417   18113 out.go:179]   - Using image docker.io/registry:3.0.0
	I1217 00:05:25.701912   18113 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1217 00:05:25.701901   18113 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1217 00:05:25.702141   18113 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1217 00:05:25.703215   18113 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1217 00:05:25.703233   18113 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1217 00:05:25.703305   18113 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-401977
	I1217 00:05:25.704124   18113 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1217 00:05:25.704151   18113 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1217 00:05:25.704228   18113 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-401977
	I1217 00:05:25.707175   18113 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1217 00:05:25.708434   18113 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1217 00:05:25.709520   18113 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1217 00:05:25.710856   18113 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1217 00:05:25.712124   18113 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1217 00:05:25.713283   18113 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1217 00:05:25.714738   18113 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1217 00:05:25.714800   18113 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1217 00:05:25.716223   18113 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1217 00:05:25.716229   18113 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1217 00:05:25.717745   18113 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1217 00:05:25.717827   18113 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-401977
	I1217 00:05:25.725738   18113 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1217 00:05:25.725821   18113 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1217 00:05:25.725912   18113 out.go:179]   - Using image docker.io/busybox:stable
	I1217 00:05:25.725982   18113 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1217 00:05:25.727335   18113 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1217 00:05:25.727351   18113 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1217 00:05:25.727409   18113 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-401977
	I1217 00:05:25.727908   18113 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1217 00:05:25.727913   18113 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.1
	I1217 00:05:25.728284   18113 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1217 00:05:25.728301   18113 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1217 00:05:25.728348   18113 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-401977
	I1217 00:05:25.728491   18113 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1217 00:05:25.728501   18113 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1217 00:05:25.729054   18113 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-401977
	I1217 00:05:25.729468   18113 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1217 00:05:25.729484   18113 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1217 00:05:25.729540   18113 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-401977
	I1217 00:05:25.730161   18113 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1217 00:05:25.730469   18113 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1217 00:05:25.730660   18113 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-401977
	I1217 00:05:25.732081   18113 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/addons-401977/id_rsa Username:docker}
	I1217 00:05:25.737079   18113 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 00:05:25.737099   18113 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 00:05:25.737146   18113 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-401977
	I1217 00:05:25.757628   18113 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/addons-401977/id_rsa Username:docker}
	I1217 00:05:25.760718   18113 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/addons-401977/id_rsa Username:docker}
	I1217 00:05:25.776255   18113 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/addons-401977/id_rsa Username:docker}
	I1217 00:05:25.778157   18113 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1217 00:05:25.795856   18113 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/addons-401977/id_rsa Username:docker}
	I1217 00:05:25.796759   18113 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/addons-401977/id_rsa Username:docker}
	I1217 00:05:25.807222   18113 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/addons-401977/id_rsa Username:docker}
	I1217 00:05:25.808319   18113 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/addons-401977/id_rsa Username:docker}
	I1217 00:05:25.817846   18113 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/addons-401977/id_rsa Username:docker}
	I1217 00:05:25.818278   18113 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/addons-401977/id_rsa Username:docker}
	I1217 00:05:25.821110   18113 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/addons-401977/id_rsa Username:docker}
	I1217 00:05:25.825929   18113 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/addons-401977/id_rsa Username:docker}
	I1217 00:05:25.830723   18113 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/addons-401977/id_rsa Username:docker}
	I1217 00:05:25.834516   18113 ssh_runner.go:195] Run: sudo systemctl start kubelet
	W1217 00:05:25.835046   18113 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1217 00:05:25.835101   18113 retry.go:31] will retry after 129.374855ms: ssh: handshake failed: EOF
	I1217 00:05:25.925653   18113 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1217 00:05:25.935321   18113 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1217 00:05:25.946426   18113 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1217 00:05:25.946451   18113 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1217 00:05:25.980051   18113 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1217 00:05:25.988313   18113 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1217 00:05:25.988337   18113 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1217 00:05:25.994586   18113 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:05:26.001723   18113 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1217 00:05:26.001743   18113 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1217 00:05:26.014447   18113 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1217 00:05:26.014469   18113 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1217 00:05:26.015981   18113 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1217 00:05:26.016680   18113 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1217 00:05:26.016826   18113 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1217 00:05:26.019003   18113 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1217 00:05:26.022518   18113 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1217 00:05:26.028131   18113 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1217 00:05:26.028149   18113 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1217 00:05:26.035087   18113 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1217 00:05:26.035113   18113 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1217 00:05:26.044281   18113 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1217 00:05:26.044314   18113 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1217 00:05:26.045536   18113 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1217 00:05:26.052433   18113 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1217 00:05:26.052544   18113 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1217 00:05:26.052687   18113 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:05:26.063544   18113 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1217 00:05:26.063624   18113 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1217 00:05:26.082433   18113 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1217 00:05:26.082453   18113 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1217 00:05:26.088579   18113 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1217 00:05:26.088607   18113 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1217 00:05:26.100929   18113 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1217 00:05:26.101054   18113 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1217 00:05:26.102001   18113 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1217 00:05:26.102016   18113 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1217 00:05:26.123082   18113 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1217 00:05:26.123107   18113 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1217 00:05:26.133573   18113 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1217 00:05:26.133599   18113 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1217 00:05:26.139443   18113 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1217 00:05:26.139467   18113 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1217 00:05:26.142584   18113 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1217 00:05:26.157472   18113 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1217 00:05:26.157498   18113 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1217 00:05:26.164411   18113 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1217 00:05:26.167919   18113 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1217 00:05:26.179656   18113 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1217 00:05:26.183124   18113 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1217 00:05:26.185028   18113 node_ready.go:35] waiting up to 6m0s for node "addons-401977" to be "Ready" ...
	I1217 00:05:26.186207   18113 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1217 00:05:26.294149   18113 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1217 00:05:26.294180   18113 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1217 00:05:26.372091   18113 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1217 00:05:26.372120   18113 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1217 00:05:26.435844   18113 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1217 00:05:26.435873   18113 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1217 00:05:26.487157   18113 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1217 00:05:26.487189   18113 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1217 00:05:26.537871   18113 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1217 00:05:26.537894   18113 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1217 00:05:26.585689   18113 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1217 00:05:26.585722   18113 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1217 00:05:26.621601   18113 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1217 00:05:26.694507   18113 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-401977" context rescaled to 1 replicas
	I1217 00:05:26.943950   18113 addons.go:495] Verifying addon registry=true in "addons-401977"
	I1217 00:05:26.945762   18113 out.go:179] * Verifying registry addon...
	I1217 00:05:26.948713   18113 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1217 00:05:26.951300   18113 addons.go:495] Verifying addon metrics-server=true in "addons-401977"
	I1217 00:05:26.954003   18113 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1217 00:05:26.954024   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1217 00:05:26.956545   18113 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I1217 00:05:27.152488   18113 addons.go:495] Verifying addon ingress=true in "addons-401977"
	I1217 00:05:27.153781   18113 out.go:179] * Verifying ingress addon...
	I1217 00:05:27.155679   18113 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1217 00:05:27.158087   18113 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1217 00:05:27.158108   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:05:27.452468   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:05:27.515548   18113 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.335842079s)
	W1217 00:05:27.515615   18113 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1217 00:05:27.515632   18113 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.329384673s)
	I1217 00:05:27.515642   18113 retry.go:31] will retry after 150.023973ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1217 00:05:27.515897   18113 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-401977"
	I1217 00:05:27.519903   18113 out.go:179] * Verifying csi-hostpath-driver addon...
	I1217 00:05:27.519906   18113 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-401977 service yakd-dashboard -n yakd-dashboard
	
	I1217 00:05:27.522362   18113 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1217 00:05:27.524462   18113 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1217 00:05:27.524475   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:05:27.659849   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:05:27.665953   18113 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1217 00:05:27.952961   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:05:28.025144   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:05:28.160033   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1217 00:05:28.187983   18113 node_ready.go:57] node "addons-401977" has "Ready":"False" status (will retry)
	I1217 00:05:28.452118   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:05:28.552269   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:05:28.659623   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:05:28.951375   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:05:29.025924   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:05:29.159021   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:05:29.451736   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:05:29.552781   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:05:29.658286   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:05:29.952045   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:05:30.024964   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:05:30.091841   18113 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.42584844s)
	I1217 00:05:30.158763   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:05:30.452109   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:05:30.553183   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:05:30.659306   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1217 00:05:30.687684   18113 node_ready.go:57] node "addons-401977" has "Ready":"False" status (will retry)
	I1217 00:05:30.951860   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:05:31.025160   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:05:31.159257   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:05:31.452229   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:05:31.552724   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:05:31.658268   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:05:31.952502   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:05:32.024700   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:05:32.158826   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:05:32.452123   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:05:32.553345   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:05:32.658855   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:05:32.951974   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:05:33.025605   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:05:33.158765   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1217 00:05:33.187880   18113 node_ready.go:57] node "addons-401977" has "Ready":"False" status (will retry)
	I1217 00:05:33.268060   18113 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1217 00:05:33.268126   18113 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-401977
	I1217 00:05:33.285232   18113 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/addons-401977/id_rsa Username:docker}
	I1217 00:05:33.389351   18113 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1217 00:05:33.401815   18113 addons.go:239] Setting addon gcp-auth=true in "addons-401977"
	I1217 00:05:33.401867   18113 host.go:66] Checking if "addons-401977" exists ...
	I1217 00:05:33.402272   18113 cli_runner.go:164] Run: docker container inspect addons-401977 --format={{.State.Status}}
	I1217 00:05:33.420007   18113 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1217 00:05:33.420075   18113 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-401977
	I1217 00:05:33.436535   18113 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/addons-401977/id_rsa Username:docker}
	I1217 00:05:33.451782   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:05:33.525584   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:05:33.527807   18113 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1217 00:05:33.529176   18113 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1217 00:05:33.530497   18113 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1217 00:05:33.530511   18113 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1217 00:05:33.543144   18113 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1217 00:05:33.543163   18113 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1217 00:05:33.555413   18113 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1217 00:05:33.555433   18113 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1217 00:05:33.567673   18113 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1217 00:05:33.658738   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:05:33.858630   18113 addons.go:495] Verifying addon gcp-auth=true in "addons-401977"
	I1217 00:05:33.859909   18113 out.go:179] * Verifying gcp-auth addon...
	I1217 00:05:33.861843   18113 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1217 00:05:33.863983   18113 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1217 00:05:33.864005   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:05:33.951250   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:05:34.025979   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:05:34.158506   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:05:34.365311   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:05:34.452240   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:05:34.524986   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:05:34.659002   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:05:34.864878   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:05:34.951147   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:05:35.025329   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:05:35.158897   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1217 00:05:35.188214   18113 node_ready.go:57] node "addons-401977" has "Ready":"False" status (will retry)
	I1217 00:05:35.364774   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:05:35.451184   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:05:35.525836   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:05:35.658673   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:05:35.864668   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:05:35.952258   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:05:36.025610   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:05:36.159237   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:05:36.365122   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:05:36.451599   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:05:36.525453   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:05:36.659062   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:05:36.864845   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:05:36.951144   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:05:37.025550   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:05:37.158933   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:05:37.364632   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:05:37.451943   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:05:37.525660   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:05:37.658056   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1217 00:05:37.688413   18113 node_ready.go:57] node "addons-401977" has "Ready":"False" status (will retry)
	I1217 00:05:37.864801   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:05:37.951837   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:05:38.025245   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:05:38.159076   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:05:38.364524   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:05:38.451868   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:05:38.525420   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:05:38.658931   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:05:38.864839   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:05:38.952236   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:05:39.025667   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:05:39.158104   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:05:39.364902   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:05:39.451212   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:05:39.525608   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:05:39.659288   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:05:39.865413   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:05:39.951789   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:05:40.025318   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:05:40.158816   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1217 00:05:40.188024   18113 node_ready.go:57] node "addons-401977" has "Ready":"False" status (will retry)
	I1217 00:05:40.364468   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:05:40.451671   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:05:40.525058   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:05:40.659053   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:05:40.864965   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:05:40.951385   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:05:41.025604   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:05:41.159144   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:05:41.364884   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:05:41.451196   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:05:41.525543   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:05:41.659359   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:05:41.865203   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:05:41.951248   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:05:42.025636   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:05:42.159193   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1217 00:05:42.188114   18113 node_ready.go:57] node "addons-401977" has "Ready":"False" status (will retry)
	I1217 00:05:42.364532   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:05:42.451702   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:05:42.525059   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:05:42.659395   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:05:42.865463   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:05:42.966251   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:05:43.025528   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:05:43.159177   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:05:43.364672   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:05:43.451945   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:05:43.525340   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:05:43.658930   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:05:43.864927   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:05:43.951171   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:05:44.025375   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:05:44.158752   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:05:44.364348   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:05:44.451788   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:05:44.525109   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:05:44.658703   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1217 00:05:44.688203   18113 node_ready.go:57] node "addons-401977" has "Ready":"False" status (will retry)
	I1217 00:05:44.864803   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:05:44.951036   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:05:45.025275   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:05:45.158722   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:05:45.365151   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:05:45.451429   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:05:45.524701   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:05:45.658520   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:05:45.864156   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:05:45.951609   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:05:46.025062   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:05:46.158569   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:05:46.364976   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:05:46.451384   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:05:46.525788   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:05:46.658712   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:05:46.864240   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:05:46.951682   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:05:47.025059   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:05:47.158583   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1217 00:05:47.187723   18113 node_ready.go:57] node "addons-401977" has "Ready":"False" status (will retry)
	I1217 00:05:47.365652   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:05:47.451814   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:05:47.525118   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:05:47.658732   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:05:47.864062   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:05:47.951284   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:05:48.025628   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:05:48.158078   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:05:48.364544   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:05:48.452067   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:05:48.525554   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:05:48.659206   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:05:48.864937   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:05:48.951233   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:05:49.025567   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:05:49.159094   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:05:49.365620   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:05:49.451941   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:05:49.525320   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:05:49.658961   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1217 00:05:49.688289   18113 node_ready.go:57] node "addons-401977" has "Ready":"False" status (will retry)
	I1217 00:05:49.864873   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:05:49.951228   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:05:50.025387   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:05:50.159077   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:05:50.364859   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:05:50.451214   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:05:50.525877   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:05:50.658326   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:05:50.865527   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:05:50.966060   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:05:51.025014   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:05:51.158592   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:05:51.365044   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:05:51.451252   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:05:51.525669   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:05:51.659365   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:05:51.865351   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:05:51.951569   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:05:52.024643   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:05:52.158453   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1217 00:05:52.187719   18113 node_ready.go:57] node "addons-401977" has "Ready":"False" status (will retry)
	I1217 00:05:52.365307   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:05:52.451624   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:05:52.524974   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:05:52.658728   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:05:52.864405   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:05:52.951700   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:05:53.024696   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:05:53.158083   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:05:53.364850   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:05:53.451023   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:05:53.525848   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:05:53.658509   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:05:53.865102   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:05:53.951386   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:05:54.025584   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:05:54.159099   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1217 00:05:54.188094   18113 node_ready.go:57] node "addons-401977" has "Ready":"False" status (will retry)
	I1217 00:05:54.364590   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:05:54.451934   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:05:54.525230   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:05:54.659286   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:05:54.864767   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:05:54.952083   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:05:55.025335   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:05:55.159063   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:05:55.364813   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:05:55.451087   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:05:55.525391   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:05:55.659261   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:05:55.865117   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:05:55.951290   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:05:56.025569   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:05:56.158958   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:05:56.364699   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:05:56.452050   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:05:56.525310   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:05:56.658963   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1217 00:05:56.688228   18113 node_ready.go:57] node "addons-401977" has "Ready":"False" status (will retry)
	I1217 00:05:56.864604   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:05:56.951966   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:05:57.025502   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:05:57.158905   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:05:57.364637   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:05:57.451668   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:05:57.525181   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:05:57.658947   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:05:57.864790   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:05:57.950738   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:05:58.024781   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:05:58.158108   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:05:58.364692   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:05:58.451804   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:05:58.525204   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:05:58.659281   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:05:58.865262   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:05:58.951580   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:05:59.024758   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:05:59.158233   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1217 00:05:59.186839   18113 node_ready.go:57] node "addons-401977" has "Ready":"False" status (will retry)
	I1217 00:05:59.364601   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:05:59.451841   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:05:59.525068   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:05:59.658781   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:05:59.864650   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:05:59.951898   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:06:00.025372   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:00.158798   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:06:00.365084   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:00.451244   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:06:00.525474   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:00.659107   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:06:00.864781   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:00.951238   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:06:01.026809   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:01.158267   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1217 00:06:01.187312   18113 node_ready.go:57] node "addons-401977" has "Ready":"False" status (will retry)
	I1217 00:06:01.364599   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:01.452034   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:06:01.525472   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:01.659337   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:06:01.864191   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:01.951577   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:06:02.024630   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:02.159297   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:06:02.364763   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:02.452028   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:06:02.525535   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:02.659213   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:06:02.865095   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:02.951392   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:06:03.025803   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:03.158055   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1217 00:06:03.187969   18113 node_ready.go:57] node "addons-401977" has "Ready":"False" status (will retry)
	I1217 00:06:03.365032   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:03.451113   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:06:03.525226   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:03.659139   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:06:03.864776   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:03.950975   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:06:04.025228   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:04.158575   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:06:04.364881   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:04.451219   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:06:04.525529   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:04.659143   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:06:04.864911   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:04.951160   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:06:05.025388   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:05.158882   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:06:05.364155   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:05.451385   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:06:05.524889   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:05.658492   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1217 00:06:05.687934   18113 node_ready.go:57] node "addons-401977" has "Ready":"False" status (will retry)
	I1217 00:06:05.864329   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:05.951625   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:06:06.024944   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:06.158487   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:06:06.364761   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:06.450877   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:06:06.525349   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:06.658837   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:06:06.877712   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:06.951596   18113 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1217 00:06:06.951616   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:06:07.024920   18113 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1217 00:06:07.024946   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:07.158597   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:06:07.187898   18113 node_ready.go:49] node "addons-401977" is "Ready"
	I1217 00:06:07.187931   18113 node_ready.go:38] duration metric: took 41.002874319s for node "addons-401977" to be "Ready" ...
	I1217 00:06:07.187948   18113 api_server.go:52] waiting for apiserver process to appear ...
	I1217 00:06:07.188019   18113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:06:07.208592   18113 api_server.go:72] duration metric: took 41.634108933s to wait for apiserver process to appear ...
	I1217 00:06:07.208629   18113 api_server.go:88] waiting for apiserver healthz status ...
	I1217 00:06:07.208654   18113 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1217 00:06:07.213828   18113 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1217 00:06:07.214882   18113 api_server.go:141] control plane version: v1.34.2
	I1217 00:06:07.214914   18113 api_server.go:131] duration metric: took 6.277134ms to wait for apiserver health ...
	I1217 00:06:07.214926   18113 system_pods.go:43] waiting for kube-system pods to appear ...
	I1217 00:06:07.261289   18113 system_pods.go:59] 20 kube-system pods found
	I1217 00:06:07.261332   18113 system_pods.go:61] "amd-gpu-device-plugin-zhxtw" [39b7820e-9767-4f89-a35e-e8e970dc8ced] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1217 00:06:07.261342   18113 system_pods.go:61] "coredns-66bc5c9577-pqbbw" [932eceaf-63fa-4947-b6bd-9022183fe57b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 00:06:07.261353   18113 system_pods.go:61] "csi-hostpath-attacher-0" [cc167fc5-9598-4c16-9567-00a80fc242c5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1217 00:06:07.261361   18113 system_pods.go:61] "csi-hostpath-resizer-0" [f28b0d1f-8e42-4c55-8691-07d3af4af925] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1217 00:06:07.261369   18113 system_pods.go:61] "csi-hostpathplugin-bc4sr" [1f387290-1028-4a87-8a5d-26cb403754c8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1217 00:06:07.261375   18113 system_pods.go:61] "etcd-addons-401977" [f49a181f-e850-428b-8412-a15b29a0fafb] Running
	I1217 00:06:07.261384   18113 system_pods.go:61] "kindnet-h5jgb" [6db99c0c-f95c-4610-abb5-b9dbcc985fd7] Running
	I1217 00:06:07.261390   18113 system_pods.go:61] "kube-apiserver-addons-401977" [2c604fe0-534d-4aee-b254-45f298b455f1] Running
	I1217 00:06:07.261396   18113 system_pods.go:61] "kube-controller-manager-addons-401977" [d5432463-cc2b-4d3a-9268-c0fbfdd5272f] Running
	I1217 00:06:07.261404   18113 system_pods.go:61] "kube-ingress-dns-minikube" [fd9e50d9-c944-4528-9420-199a55f88ca6] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1217 00:06:07.261412   18113 system_pods.go:61] "kube-proxy-rgd8j" [7054d552-b932-49a5-83ba-68fd7943c0c4] Running
	I1217 00:06:07.261419   18113 system_pods.go:61] "kube-scheduler-addons-401977" [91fdd9b7-07ba-4338-a158-f5edfdcac7ac] Running
	I1217 00:06:07.261427   18113 system_pods.go:61] "metrics-server-85b7d694d7-krz87" [e7e57a4b-dfdd-48e7-93e6-72b817b73907] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1217 00:06:07.261436   18113 system_pods.go:61] "nvidia-device-plugin-daemonset-xk8ql" [6f8c2cc8-3d77-495a-902d-fc67c36cde4d] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1217 00:06:07.261446   18113 system_pods.go:61] "registry-6b586f9694-z62qp" [bddc8662-c9eb-4392-837b-010328dd2e70] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1217 00:06:07.261454   18113 system_pods.go:61] "registry-creds-764b6fb674-5ddkb" [99c64f75-ab3b-49e5-b5d9-f425e95c71c1] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1217 00:06:07.261465   18113 system_pods.go:61] "registry-proxy-58fhj" [9b3ecc60-f54f-46fd-8a40-a56e4574bb5b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1217 00:06:07.261476   18113 system_pods.go:61] "snapshot-controller-7d9fbc56b8-bs9sb" [1002c903-bd7c-4827-8b43-4bb428bbab2b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1217 00:06:07.261485   18113 system_pods.go:61] "snapshot-controller-7d9fbc56b8-tqvm8" [35e50a35-6f1d-423a-8a7a-c09636dfbfdb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1217 00:06:07.261492   18113 system_pods.go:61] "storage-provisioner" [a2ddca2b-2eea-4f4c-b89d-c0d6966b5fb1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 00:06:07.261499   18113 system_pods.go:74] duration metric: took 46.565829ms to wait for pod list to return data ...
	I1217 00:06:07.261511   18113 default_sa.go:34] waiting for default service account to be created ...
	I1217 00:06:07.264154   18113 default_sa.go:45] found service account: "default"
	I1217 00:06:07.264181   18113 default_sa.go:55] duration metric: took 2.663055ms for default service account to be created ...
	I1217 00:06:07.264194   18113 system_pods.go:116] waiting for k8s-apps to be running ...
	I1217 00:06:07.361198   18113 system_pods.go:86] 20 kube-system pods found
	I1217 00:06:07.361227   18113 system_pods.go:89] "amd-gpu-device-plugin-zhxtw" [39b7820e-9767-4f89-a35e-e8e970dc8ced] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1217 00:06:07.361234   18113 system_pods.go:89] "coredns-66bc5c9577-pqbbw" [932eceaf-63fa-4947-b6bd-9022183fe57b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 00:06:07.361241   18113 system_pods.go:89] "csi-hostpath-attacher-0" [cc167fc5-9598-4c16-9567-00a80fc242c5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1217 00:06:07.361246   18113 system_pods.go:89] "csi-hostpath-resizer-0" [f28b0d1f-8e42-4c55-8691-07d3af4af925] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1217 00:06:07.361252   18113 system_pods.go:89] "csi-hostpathplugin-bc4sr" [1f387290-1028-4a87-8a5d-26cb403754c8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1217 00:06:07.361255   18113 system_pods.go:89] "etcd-addons-401977" [f49a181f-e850-428b-8412-a15b29a0fafb] Running
	I1217 00:06:07.361260   18113 system_pods.go:89] "kindnet-h5jgb" [6db99c0c-f95c-4610-abb5-b9dbcc985fd7] Running
	I1217 00:06:07.361264   18113 system_pods.go:89] "kube-apiserver-addons-401977" [2c604fe0-534d-4aee-b254-45f298b455f1] Running
	I1217 00:06:07.361267   18113 system_pods.go:89] "kube-controller-manager-addons-401977" [d5432463-cc2b-4d3a-9268-c0fbfdd5272f] Running
	I1217 00:06:07.361273   18113 system_pods.go:89] "kube-ingress-dns-minikube" [fd9e50d9-c944-4528-9420-199a55f88ca6] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1217 00:06:07.361276   18113 system_pods.go:89] "kube-proxy-rgd8j" [7054d552-b932-49a5-83ba-68fd7943c0c4] Running
	I1217 00:06:07.361280   18113 system_pods.go:89] "kube-scheduler-addons-401977" [91fdd9b7-07ba-4338-a158-f5edfdcac7ac] Running
	I1217 00:06:07.361288   18113 system_pods.go:89] "metrics-server-85b7d694d7-krz87" [e7e57a4b-dfdd-48e7-93e6-72b817b73907] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1217 00:06:07.361293   18113 system_pods.go:89] "nvidia-device-plugin-daemonset-xk8ql" [6f8c2cc8-3d77-495a-902d-fc67c36cde4d] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1217 00:06:07.361303   18113 system_pods.go:89] "registry-6b586f9694-z62qp" [bddc8662-c9eb-4392-837b-010328dd2e70] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1217 00:06:07.361308   18113 system_pods.go:89] "registry-creds-764b6fb674-5ddkb" [99c64f75-ab3b-49e5-b5d9-f425e95c71c1] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1217 00:06:07.361313   18113 system_pods.go:89] "registry-proxy-58fhj" [9b3ecc60-f54f-46fd-8a40-a56e4574bb5b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1217 00:06:07.361319   18113 system_pods.go:89] "snapshot-controller-7d9fbc56b8-bs9sb" [1002c903-bd7c-4827-8b43-4bb428bbab2b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1217 00:06:07.361324   18113 system_pods.go:89] "snapshot-controller-7d9fbc56b8-tqvm8" [35e50a35-6f1d-423a-8a7a-c09636dfbfdb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1217 00:06:07.361329   18113 system_pods.go:89] "storage-provisioner" [a2ddca2b-2eea-4f4c-b89d-c0d6966b5fb1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 00:06:07.361342   18113 retry.go:31] will retry after 241.230941ms: missing components: kube-dns
	I1217 00:06:07.364485   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:07.460584   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:06:07.526092   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:07.607930   18113 system_pods.go:86] 20 kube-system pods found
	I1217 00:06:07.607967   18113 system_pods.go:89] "amd-gpu-device-plugin-zhxtw" [39b7820e-9767-4f89-a35e-e8e970dc8ced] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1217 00:06:07.607980   18113 system_pods.go:89] "coredns-66bc5c9577-pqbbw" [932eceaf-63fa-4947-b6bd-9022183fe57b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 00:06:07.608013   18113 system_pods.go:89] "csi-hostpath-attacher-0" [cc167fc5-9598-4c16-9567-00a80fc242c5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1217 00:06:07.608022   18113 system_pods.go:89] "csi-hostpath-resizer-0" [f28b0d1f-8e42-4c55-8691-07d3af4af925] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1217 00:06:07.608032   18113 system_pods.go:89] "csi-hostpathplugin-bc4sr" [1f387290-1028-4a87-8a5d-26cb403754c8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1217 00:06:07.608037   18113 system_pods.go:89] "etcd-addons-401977" [f49a181f-e850-428b-8412-a15b29a0fafb] Running
	I1217 00:06:07.608045   18113 system_pods.go:89] "kindnet-h5jgb" [6db99c0c-f95c-4610-abb5-b9dbcc985fd7] Running
	I1217 00:06:07.608051   18113 system_pods.go:89] "kube-apiserver-addons-401977" [2c604fe0-534d-4aee-b254-45f298b455f1] Running
	I1217 00:06:07.608056   18113 system_pods.go:89] "kube-controller-manager-addons-401977" [d5432463-cc2b-4d3a-9268-c0fbfdd5272f] Running
	I1217 00:06:07.608066   18113 system_pods.go:89] "kube-ingress-dns-minikube" [fd9e50d9-c944-4528-9420-199a55f88ca6] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1217 00:06:07.608071   18113 system_pods.go:89] "kube-proxy-rgd8j" [7054d552-b932-49a5-83ba-68fd7943c0c4] Running
	I1217 00:06:07.608077   18113 system_pods.go:89] "kube-scheduler-addons-401977" [91fdd9b7-07ba-4338-a158-f5edfdcac7ac] Running
	I1217 00:06:07.608085   18113 system_pods.go:89] "metrics-server-85b7d694d7-krz87" [e7e57a4b-dfdd-48e7-93e6-72b817b73907] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1217 00:06:07.608094   18113 system_pods.go:89] "nvidia-device-plugin-daemonset-xk8ql" [6f8c2cc8-3d77-495a-902d-fc67c36cde4d] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1217 00:06:07.608103   18113 system_pods.go:89] "registry-6b586f9694-z62qp" [bddc8662-c9eb-4392-837b-010328dd2e70] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1217 00:06:07.608112   18113 system_pods.go:89] "registry-creds-764b6fb674-5ddkb" [99c64f75-ab3b-49e5-b5d9-f425e95c71c1] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1217 00:06:07.608120   18113 system_pods.go:89] "registry-proxy-58fhj" [9b3ecc60-f54f-46fd-8a40-a56e4574bb5b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1217 00:06:07.608129   18113 system_pods.go:89] "snapshot-controller-7d9fbc56b8-bs9sb" [1002c903-bd7c-4827-8b43-4bb428bbab2b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1217 00:06:07.608147   18113 system_pods.go:89] "snapshot-controller-7d9fbc56b8-tqvm8" [35e50a35-6f1d-423a-8a7a-c09636dfbfdb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1217 00:06:07.608155   18113 system_pods.go:89] "storage-provisioner" [a2ddca2b-2eea-4f4c-b89d-c0d6966b5fb1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 00:06:07.608171   18113 retry.go:31] will retry after 241.43571ms: missing components: kube-dns
	I1217 00:06:07.659453   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:06:07.855137   18113 system_pods.go:86] 20 kube-system pods found
	I1217 00:06:07.855170   18113 system_pods.go:89] "amd-gpu-device-plugin-zhxtw" [39b7820e-9767-4f89-a35e-e8e970dc8ced] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1217 00:06:07.855183   18113 system_pods.go:89] "coredns-66bc5c9577-pqbbw" [932eceaf-63fa-4947-b6bd-9022183fe57b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 00:06:07.855194   18113 system_pods.go:89] "csi-hostpath-attacher-0" [cc167fc5-9598-4c16-9567-00a80fc242c5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1217 00:06:07.855202   18113 system_pods.go:89] "csi-hostpath-resizer-0" [f28b0d1f-8e42-4c55-8691-07d3af4af925] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1217 00:06:07.855213   18113 system_pods.go:89] "csi-hostpathplugin-bc4sr" [1f387290-1028-4a87-8a5d-26cb403754c8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1217 00:06:07.855222   18113 system_pods.go:89] "etcd-addons-401977" [f49a181f-e850-428b-8412-a15b29a0fafb] Running
	I1217 00:06:07.855228   18113 system_pods.go:89] "kindnet-h5jgb" [6db99c0c-f95c-4610-abb5-b9dbcc985fd7] Running
	I1217 00:06:07.855235   18113 system_pods.go:89] "kube-apiserver-addons-401977" [2c604fe0-534d-4aee-b254-45f298b455f1] Running
	I1217 00:06:07.855244   18113 system_pods.go:89] "kube-controller-manager-addons-401977" [d5432463-cc2b-4d3a-9268-c0fbfdd5272f] Running
	I1217 00:06:07.855252   18113 system_pods.go:89] "kube-ingress-dns-minikube" [fd9e50d9-c944-4528-9420-199a55f88ca6] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1217 00:06:07.855257   18113 system_pods.go:89] "kube-proxy-rgd8j" [7054d552-b932-49a5-83ba-68fd7943c0c4] Running
	I1217 00:06:07.855270   18113 system_pods.go:89] "kube-scheduler-addons-401977" [91fdd9b7-07ba-4338-a158-f5edfdcac7ac] Running
	I1217 00:06:07.855278   18113 system_pods.go:89] "metrics-server-85b7d694d7-krz87" [e7e57a4b-dfdd-48e7-93e6-72b817b73907] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1217 00:06:07.855291   18113 system_pods.go:89] "nvidia-device-plugin-daemonset-xk8ql" [6f8c2cc8-3d77-495a-902d-fc67c36cde4d] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1217 00:06:07.855303   18113 system_pods.go:89] "registry-6b586f9694-z62qp" [bddc8662-c9eb-4392-837b-010328dd2e70] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1217 00:06:07.855312   18113 system_pods.go:89] "registry-creds-764b6fb674-5ddkb" [99c64f75-ab3b-49e5-b5d9-f425e95c71c1] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1217 00:06:07.855320   18113 system_pods.go:89] "registry-proxy-58fhj" [9b3ecc60-f54f-46fd-8a40-a56e4574bb5b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1217 00:06:07.855333   18113 system_pods.go:89] "snapshot-controller-7d9fbc56b8-bs9sb" [1002c903-bd7c-4827-8b43-4bb428bbab2b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1217 00:06:07.855344   18113 system_pods.go:89] "snapshot-controller-7d9fbc56b8-tqvm8" [35e50a35-6f1d-423a-8a7a-c09636dfbfdb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1217 00:06:07.855355   18113 system_pods.go:89] "storage-provisioner" [a2ddca2b-2eea-4f4c-b89d-c0d6966b5fb1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 00:06:07.855373   18113 retry.go:31] will retry after 466.346009ms: missing components: kube-dns
	I1217 00:06:07.865391   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:07.952138   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:06:08.025706   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:08.160181   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:06:08.326205   18113 system_pods.go:86] 20 kube-system pods found
	I1217 00:06:08.326242   18113 system_pods.go:89] "amd-gpu-device-plugin-zhxtw" [39b7820e-9767-4f89-a35e-e8e970dc8ced] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1217 00:06:08.326250   18113 system_pods.go:89] "coredns-66bc5c9577-pqbbw" [932eceaf-63fa-4947-b6bd-9022183fe57b] Running
	I1217 00:06:08.326261   18113 system_pods.go:89] "csi-hostpath-attacher-0" [cc167fc5-9598-4c16-9567-00a80fc242c5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1217 00:06:08.326269   18113 system_pods.go:89] "csi-hostpath-resizer-0" [f28b0d1f-8e42-4c55-8691-07d3af4af925] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1217 00:06:08.326279   18113 system_pods.go:89] "csi-hostpathplugin-bc4sr" [1f387290-1028-4a87-8a5d-26cb403754c8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1217 00:06:08.326288   18113 system_pods.go:89] "etcd-addons-401977" [f49a181f-e850-428b-8412-a15b29a0fafb] Running
	I1217 00:06:08.326294   18113 system_pods.go:89] "kindnet-h5jgb" [6db99c0c-f95c-4610-abb5-b9dbcc985fd7] Running
	I1217 00:06:08.326302   18113 system_pods.go:89] "kube-apiserver-addons-401977" [2c604fe0-534d-4aee-b254-45f298b455f1] Running
	I1217 00:06:08.326308   18113 system_pods.go:89] "kube-controller-manager-addons-401977" [d5432463-cc2b-4d3a-9268-c0fbfdd5272f] Running
	I1217 00:06:08.326316   18113 system_pods.go:89] "kube-ingress-dns-minikube" [fd9e50d9-c944-4528-9420-199a55f88ca6] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1217 00:06:08.326323   18113 system_pods.go:89] "kube-proxy-rgd8j" [7054d552-b932-49a5-83ba-68fd7943c0c4] Running
	I1217 00:06:08.326328   18113 system_pods.go:89] "kube-scheduler-addons-401977" [91fdd9b7-07ba-4338-a158-f5edfdcac7ac] Running
	I1217 00:06:08.326348   18113 system_pods.go:89] "metrics-server-85b7d694d7-krz87" [e7e57a4b-dfdd-48e7-93e6-72b817b73907] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1217 00:06:08.326360   18113 system_pods.go:89] "nvidia-device-plugin-daemonset-xk8ql" [6f8c2cc8-3d77-495a-902d-fc67c36cde4d] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1217 00:06:08.326368   18113 system_pods.go:89] "registry-6b586f9694-z62qp" [bddc8662-c9eb-4392-837b-010328dd2e70] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1217 00:06:08.326377   18113 system_pods.go:89] "registry-creds-764b6fb674-5ddkb" [99c64f75-ab3b-49e5-b5d9-f425e95c71c1] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1217 00:06:08.326384   18113 system_pods.go:89] "registry-proxy-58fhj" [9b3ecc60-f54f-46fd-8a40-a56e4574bb5b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1217 00:06:08.326395   18113 system_pods.go:89] "snapshot-controller-7d9fbc56b8-bs9sb" [1002c903-bd7c-4827-8b43-4bb428bbab2b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1217 00:06:08.326404   18113 system_pods.go:89] "snapshot-controller-7d9fbc56b8-tqvm8" [35e50a35-6f1d-423a-8a7a-c09636dfbfdb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1217 00:06:08.326410   18113 system_pods.go:89] "storage-provisioner" [a2ddca2b-2eea-4f4c-b89d-c0d6966b5fb1] Running
	I1217 00:06:08.326419   18113 system_pods.go:126] duration metric: took 1.062196423s to wait for k8s-apps to be running ...
	I1217 00:06:08.326431   18113 system_svc.go:44] waiting for kubelet service to be running ....
	I1217 00:06:08.326481   18113 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 00:06:08.341868   18113 system_svc.go:56] duration metric: took 15.425848ms WaitForService to wait for kubelet
	I1217 00:06:08.341899   18113 kubeadm.go:587] duration metric: took 42.76742531s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 00:06:08.341920   18113 node_conditions.go:102] verifying NodePressure condition ...
	I1217 00:06:08.344868   18113 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1217 00:06:08.344895   18113 node_conditions.go:123] node cpu capacity is 8
	I1217 00:06:08.344919   18113 node_conditions.go:105] duration metric: took 2.992433ms to run NodePressure ...
	I1217 00:06:08.344934   18113 start.go:242] waiting for startup goroutines ...
	I1217 00:06:08.365543   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:08.452517   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:06:08.553338   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:08.659132   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:06:08.864864   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:08.951639   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:06:09.025390   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:09.159255   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:06:09.367599   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:09.453605   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:06:09.526283   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:09.660249   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:06:09.865811   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:09.951716   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:06:10.026085   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:10.159176   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:06:10.364908   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:10.451853   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:06:10.525883   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:10.660106   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:06:10.865117   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:10.951719   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:06:11.025835   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:11.160090   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:06:11.366490   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:11.452340   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:06:11.526486   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:11.659510   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:06:11.865010   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:11.951811   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:06:12.025546   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:12.159663   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:06:12.365548   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:12.452235   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:06:12.526321   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:12.659183   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:06:12.864875   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:12.951822   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:06:13.025764   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:13.159370   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:06:13.365291   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:13.451893   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:06:13.525651   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:13.659615   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:06:13.865054   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:13.951837   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:06:14.025674   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:14.159543   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:06:14.365568   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:14.452285   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:06:14.526414   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:14.659354   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:06:14.865402   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:14.951846   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:06:15.025276   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:15.158985   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:06:15.366974   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:15.451830   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:06:15.525717   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:15.659046   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:06:15.865205   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:15.952632   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:06:16.025760   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:16.159905   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:06:16.365525   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:16.452565   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:06:16.525715   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:16.659649   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:06:16.865602   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:16.952478   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:06:17.026409   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:17.159022   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:06:17.364648   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:17.452299   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:06:17.525979   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:17.659227   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:06:17.864189   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:17.951654   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:06:18.025605   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:18.159380   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:06:18.365082   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:18.465261   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:06:18.525671   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:18.659283   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:06:18.865101   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:18.952891   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:06:19.026489   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:19.159627   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:06:19.365444   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:19.452109   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:06:19.525913   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:19.658738   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:06:19.864495   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:19.952470   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:06:20.026510   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:20.159511   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:06:20.364981   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:20.451084   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:06:20.525874   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:20.659418   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:06:20.865489   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:20.951881   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:06:21.025762   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:21.159696   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:06:21.365527   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:21.466357   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:06:21.526614   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:21.659480   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:06:21.865965   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:21.951788   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:06:22.025351   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:22.158967   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:06:22.364114   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:22.451565   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:06:22.525191   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:22.658441   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:06:22.865039   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:22.951125   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:06:23.026094   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:23.161422   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:06:23.365503   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:23.452160   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:06:23.526183   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:23.659113   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:06:23.864567   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:23.952267   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:06:24.026352   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:24.158923   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:06:24.366209   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:24.451695   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:06:24.525236   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:24.658792   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:06:24.864020   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:24.951589   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:06:25.025121   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:25.158466   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:06:25.365148   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:25.451346   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:06:25.525890   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:25.658495   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:06:25.864974   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:25.951645   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:06:26.025982   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:26.158353   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:06:26.365190   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:26.465250   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:06:26.566661   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:26.659433   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:06:26.864928   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:26.951395   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:06:27.026640   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:27.158367   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:06:27.364475   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:27.452325   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:06:27.526389   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:27.659142   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:06:27.865682   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:27.952721   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:06:28.029312   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:28.158685   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:06:28.365839   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:28.451254   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:06:28.526329   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:28.659278   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:06:28.864795   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:28.951400   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:06:29.026232   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:29.158859   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:06:29.365691   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:29.452526   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:06:29.525354   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:29.659249   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:06:29.865088   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:29.965453   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:06:30.025863   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:30.159428   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:06:30.364727   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:30.452575   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:06:30.525927   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:30.658810   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:06:30.865140   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:30.951911   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:06:31.026015   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:31.158713   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:06:31.364916   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:31.451116   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:06:31.526361   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:31.660077   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:06:31.865852   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:31.951774   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:06:32.026040   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:32.160204   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:06:32.365428   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:32.470620   18113 kapi.go:107] duration metric: took 1m5.52190366s to wait for kubernetes.io/minikube-addons=registry ...
	I1217 00:06:32.525774   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:32.659058   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:06:32.864916   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:33.025746   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:33.159612   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:06:33.365335   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:33.526754   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:33.659849   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:06:33.865120   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:34.026280   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:34.159264   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:06:34.364974   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:34.525983   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:34.658606   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:06:34.864972   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:35.025836   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:35.158495   18113 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:06:35.365713   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:35.526105   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:35.659219   18113 kapi.go:107] duration metric: took 1m8.503534052s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1217 00:06:35.864813   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:36.025513   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:36.366379   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:36.527468   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:36.865738   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:37.025970   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:37.364918   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:37.525808   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:37.864778   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:38.026063   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:38.364576   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:38.525750   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:38.864410   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:39.026484   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:39.365919   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:39.525785   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:39.865462   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:40.025699   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:40.365554   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:40.525707   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:40.864487   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:06:41.025586   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:41.365046   18113 kapi.go:107] duration metric: took 1m7.503198603s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1217 00:06:41.366449   18113 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-401977 cluster.
	I1217 00:06:41.367689   18113 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1217 00:06:41.369013   18113 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1217 00:06:41.526531   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:42.026361   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:42.571319   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:43.025480   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:43.526126   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:44.025659   18113 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:06:44.526445   18113 kapi.go:107] duration metric: took 1m17.004079884s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1217 00:06:44.528247   18113 out.go:179] * Enabled addons: amd-gpu-device-plugin, registry-creds, inspektor-gadget, storage-provisioner, nvidia-device-plugin, cloud-spanner, ingress-dns, metrics-server, storage-provisioner-rancher, yakd, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1217 00:06:44.529442   18113 addons.go:530] duration metric: took 1m18.954939299s for enable addons: enabled=[amd-gpu-device-plugin registry-creds inspektor-gadget storage-provisioner nvidia-device-plugin cloud-spanner ingress-dns metrics-server storage-provisioner-rancher yakd volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1217 00:06:44.529485   18113 start.go:247] waiting for cluster config update ...
	I1217 00:06:44.529502   18113 start.go:256] writing updated cluster config ...
	I1217 00:06:44.529747   18113 ssh_runner.go:195] Run: rm -f paused
	I1217 00:06:44.533603   18113 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 00:06:44.536253   18113 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-pqbbw" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:06:44.539679   18113 pod_ready.go:94] pod "coredns-66bc5c9577-pqbbw" is "Ready"
	I1217 00:06:44.539697   18113 pod_ready.go:86] duration metric: took 3.423059ms for pod "coredns-66bc5c9577-pqbbw" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:06:44.541350   18113 pod_ready.go:83] waiting for pod "etcd-addons-401977" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:06:44.544185   18113 pod_ready.go:94] pod "etcd-addons-401977" is "Ready"
	I1217 00:06:44.544199   18113 pod_ready.go:86] duration metric: took 2.833386ms for pod "etcd-addons-401977" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:06:44.545660   18113 pod_ready.go:83] waiting for pod "kube-apiserver-addons-401977" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:06:44.548469   18113 pod_ready.go:94] pod "kube-apiserver-addons-401977" is "Ready"
	I1217 00:06:44.548488   18113 pod_ready.go:86] duration metric: took 2.813397ms for pod "kube-apiserver-addons-401977" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:06:44.550147   18113 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-401977" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:06:44.937691   18113 pod_ready.go:94] pod "kube-controller-manager-addons-401977" is "Ready"
	I1217 00:06:44.937715   18113 pod_ready.go:86] duration metric: took 387.551285ms for pod "kube-controller-manager-addons-401977" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:06:45.137259   18113 pod_ready.go:83] waiting for pod "kube-proxy-rgd8j" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:06:45.536602   18113 pod_ready.go:94] pod "kube-proxy-rgd8j" is "Ready"
	I1217 00:06:45.536634   18113 pod_ready.go:86] duration metric: took 399.354868ms for pod "kube-proxy-rgd8j" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:06:45.738018   18113 pod_ready.go:83] waiting for pod "kube-scheduler-addons-401977" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:06:46.137572   18113 pod_ready.go:94] pod "kube-scheduler-addons-401977" is "Ready"
	I1217 00:06:46.137599   18113 pod_ready.go:86] duration metric: took 399.553329ms for pod "kube-scheduler-addons-401977" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:06:46.137611   18113 pod_ready.go:40] duration metric: took 1.603984915s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 00:06:46.179884   18113 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1217 00:06:46.181726   18113 out.go:179] * Done! kubectl is now configured to use "addons-401977" cluster and "default" namespace by default
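
The gcp-auth messages above describe how to opt a pod out of credential mounting: give the pod a `gcp-auth-skip-secret` label before it is created, since the webhook only mutates pods at admission time (existing pods must be recreated or the addon re-enabled with --refresh). Below is a minimal client-go sketch of creating such a pod; the pod name is hypothetical, the label value "true" is an assumption about what the webhook checks, and the default kubeconfig is assumed to point at the addons-401977 cluster.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the default kubeconfig (assumed to point at the cluster created above).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name: "busybox-no-gcp-auth", // hypothetical name, for illustration only
			// Pods carrying this label are skipped by the gcp-auth admission
			// webhook, so no GCP credential secret is mounted into them.
			Labels: map[string]string{"gcp-auth-skip-secret": "true"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "busybox",
				Image:   "gcr.io/k8s-minikube/busybox:1.28.4-glibc",
				Command: []string{"sleep", "3600"},
			}},
		},
	}

	created, err := client.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created pod:", created.Name)
}

Labeling a pod after the fact has no effect for the same reason: the mutation happens once, at create time, which is why the log suggests recreating pods or rerunning the addon with --refresh.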
	
	
	==> CRI-O <==
	Dec 17 00:06:43 addons-401977 crio[774]: time="2025-12-17T00:06:43.515504551Z" level=info msg="Starting container: ad54e02660b07cbde6d493c7d5e3ed172475b94a9eaee4e87d4bd9ef151c0b22" id=5c22fa0c-7406-4129-8e54-9693cb370713 name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 00:06:43 addons-401977 crio[774]: time="2025-12-17T00:06:43.51815227Z" level=info msg="Started container" PID=6079 containerID=ad54e02660b07cbde6d493c7d5e3ed172475b94a9eaee4e87d4bd9ef151c0b22 description=kube-system/csi-hostpathplugin-bc4sr/csi-snapshotter id=5c22fa0c-7406-4129-8e54-9693cb370713 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4bc127d16a688696c8a85953ee77308ef180bddaf2e934a2c1646ca6a8ac0a1d
	Dec 17 00:06:47 addons-401977 crio[774]: time="2025-12-17T00:06:47.031286675Z" level=info msg="Running pod sandbox: default/busybox/POD" id=240126a9-9a86-47a8-9a63-c8f59b49e2eb name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 17 00:06:47 addons-401977 crio[774]: time="2025-12-17T00:06:47.03134475Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 00:06:47 addons-401977 crio[774]: time="2025-12-17T00:06:47.036902152Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:7223162eccc2c65e873430a1ea9b9757665817ea0a3cf119252c3a5a3f6742c8 UID:0d3a07bd-259f-4827-8d61-b6a0453c30dc NetNS:/var/run/netns/4317b978-c9d3-4ea6-8136-e8f107a7c123 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000154a90}] Aliases:map[]}"
	Dec 17 00:06:47 addons-401977 crio[774]: time="2025-12-17T00:06:47.036931941Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 17 00:06:47 addons-401977 crio[774]: time="2025-12-17T00:06:47.046231187Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:7223162eccc2c65e873430a1ea9b9757665817ea0a3cf119252c3a5a3f6742c8 UID:0d3a07bd-259f-4827-8d61-b6a0453c30dc NetNS:/var/run/netns/4317b978-c9d3-4ea6-8136-e8f107a7c123 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000154a90}] Aliases:map[]}"
	Dec 17 00:06:47 addons-401977 crio[774]: time="2025-12-17T00:06:47.046345903Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 17 00:06:47 addons-401977 crio[774]: time="2025-12-17T00:06:47.047077783Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 17 00:06:47 addons-401977 crio[774]: time="2025-12-17T00:06:47.047812614Z" level=info msg="Ran pod sandbox 7223162eccc2c65e873430a1ea9b9757665817ea0a3cf119252c3a5a3f6742c8 with infra container: default/busybox/POD" id=240126a9-9a86-47a8-9a63-c8f59b49e2eb name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 17 00:06:47 addons-401977 crio[774]: time="2025-12-17T00:06:47.048890343Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=266c3596-0c6e-4344-84d1-b3c871475d3f name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:06:47 addons-401977 crio[774]: time="2025-12-17T00:06:47.049027243Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=266c3596-0c6e-4344-84d1-b3c871475d3f name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:06:47 addons-401977 crio[774]: time="2025-12-17T00:06:47.049074455Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=266c3596-0c6e-4344-84d1-b3c871475d3f name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:06:47 addons-401977 crio[774]: time="2025-12-17T00:06:47.049630285Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=4cd4e3c6-59fb-42c3-9dcb-2a109ce6b6a0 name=/runtime.v1.ImageService/PullImage
	Dec 17 00:06:47 addons-401977 crio[774]: time="2025-12-17T00:06:47.051024221Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 17 00:06:47 addons-401977 crio[774]: time="2025-12-17T00:06:47.631521741Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=4cd4e3c6-59fb-42c3-9dcb-2a109ce6b6a0 name=/runtime.v1.ImageService/PullImage
	Dec 17 00:06:47 addons-401977 crio[774]: time="2025-12-17T00:06:47.632086796Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=7b4d4dfd-2d15-4fc8-af2e-750256d373ef name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:06:47 addons-401977 crio[774]: time="2025-12-17T00:06:47.633485944Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=92f08614-f619-4fb5-81c0-ee8f05b6ce3c name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:06:47 addons-401977 crio[774]: time="2025-12-17T00:06:47.636663951Z" level=info msg="Creating container: default/busybox/busybox" id=d0c1798c-9578-46d7-90dd-4721cdf52f3a name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 00:06:47 addons-401977 crio[774]: time="2025-12-17T00:06:47.636774893Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 00:06:47 addons-401977 crio[774]: time="2025-12-17T00:06:47.642567881Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 00:06:47 addons-401977 crio[774]: time="2025-12-17T00:06:47.643197973Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 00:06:47 addons-401977 crio[774]: time="2025-12-17T00:06:47.674868601Z" level=info msg="Created container 103255e2d8ad9335e3270204bf64320f97589b788ac2725fb39714f8729eb62d: default/busybox/busybox" id=d0c1798c-9578-46d7-90dd-4721cdf52f3a name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 00:06:47 addons-401977 crio[774]: time="2025-12-17T00:06:47.675410654Z" level=info msg="Starting container: 103255e2d8ad9335e3270204bf64320f97589b788ac2725fb39714f8729eb62d" id=51e9857f-f86c-4d11-aadb-f42ca114360e name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 00:06:47 addons-401977 crio[774]: time="2025-12-17T00:06:47.677321778Z" level=info msg="Started container" PID=6196 containerID=103255e2d8ad9335e3270204bf64320f97589b788ac2725fb39714f8729eb62d description=default/busybox/busybox id=51e9857f-f86c-4d11-aadb-f42ca114360e name=/runtime.v1.RuntimeService/StartContainer sandboxID=7223162eccc2c65e873430a1ea9b9757665817ea0a3cf119252c3a5a3f6742c8
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	103255e2d8ad9       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          7 seconds ago        Running             busybox                                  0                   7223162eccc2c       busybox                                     default
	ad54e02660b07       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          11 seconds ago       Running             csi-snapshotter                          0                   4bc127d16a688       csi-hostpathplugin-bc4sr                    kube-system
	45b944b097394       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          13 seconds ago       Running             csi-provisioner                          0                   4bc127d16a688       csi-hostpathplugin-bc4sr                    kube-system
	208d6abcaa9c4       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 14 seconds ago       Running             gcp-auth                                 0                   6f4e36339936c       gcp-auth-78565c9fb4-gbjdx                   gcp-auth
	96a32ce198ee5       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            16 seconds ago       Running             liveness-probe                           0                   4bc127d16a688       csi-hostpathplugin-bc4sr                    kube-system
	6060f042efdda       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           16 seconds ago       Running             hostpath                                 0                   4bc127d16a688       csi-hostpathplugin-bc4sr                    kube-system
	eb167c0e68a1e       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:ea428be7b01d41418fca4d91ae3dff6b037bdc0d42757e7ad392a38536488a1a                            17 seconds ago       Running             gadget                                   0                   1b43f20b34f0e       gadget-8kklk                                gadget
	cfb270351b771       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                19 seconds ago       Running             node-driver-registrar                    0                   4bc127d16a688       csi-hostpathplugin-bc4sr                    kube-system
	96fc010332893       registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad                             20 seconds ago       Running             controller                               0                   29ddfda06c85b       ingress-nginx-controller-85d4c799dd-2ntv7   ingress-nginx
	60a76e334b179       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              24 seconds ago       Running             registry-proxy                           0                   0f9e72c9e7c88       registry-proxy-58fhj                        kube-system
	404a83db71038       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   25 seconds ago       Running             csi-external-health-monitor-controller   0                   4bc127d16a688       csi-hostpathplugin-bc4sr                    kube-system
	7477be1e8e83d       nvcr.io/nvidia/k8s-device-plugin@sha256:20db699f1480b6f37423cab909e9c6df5a4fdbd981b405e0d72f00a86fee5100                                     26 seconds ago       Running             nvidia-device-plugin-ctr                 0                   85327ca941bc7       nvidia-device-plugin-daemonset-xk8ql        kube-system
	88b4569360ba9       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      28 seconds ago       Running             volume-snapshot-controller               0                   9cc1ce53b231c       snapshot-controller-7d9fbc56b8-bs9sb        kube-system
	7ad73ae76171d       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     29 seconds ago       Running             amd-gpu-device-plugin                    0                   4d11e99e93013       amd-gpu-device-plugin-zhxtw                 kube-system
	9032486c9c486       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e2d8d9e1553c1ac5f9f41bc34d38d1eda519ed77a3106b036c43b6667dad19a9                   30 seconds ago       Exited              patch                                    0                   313ae24dd8dd0       ingress-nginx-admission-patch-md92j         ingress-nginx
	e77e53ca2e567       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              30 seconds ago       Running             csi-resizer                              0                   84c2ab48daa1f       csi-hostpath-resizer-0                      kube-system
	d5d932c1082d3       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             31 seconds ago       Running             csi-attacher                             0                   cad03d8857931       csi-hostpath-attacher-0                     kube-system
	3aed9abe0707c       a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e                                                                             32 seconds ago       Exited              patch                                    1                   0bad81147254f       gcp-auth-certs-patch-5m5pq                  gcp-auth
	281ee250a9bc1       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e2d8d9e1553c1ac5f9f41bc34d38d1eda519ed77a3106b036c43b6667dad19a9                   32 seconds ago       Exited              create                                   0                   9e360f06445dc       gcp-auth-certs-create-bmhtm                 gcp-auth
	5d7bc94a6e762       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      33 seconds ago       Running             volume-snapshot-controller               0                   4f1c88f6d0153       snapshot-controller-7d9fbc56b8-tqvm8        kube-system
	58060993a0fdf       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e2d8d9e1553c1ac5f9f41bc34d38d1eda519ed77a3106b036c43b6667dad19a9                   34 seconds ago       Exited              create                                   0                   c5cd8d8303910       ingress-nginx-admission-create-9xxch        ingress-nginx
	1875adcd7ee31       gcr.io/cloud-spanner-emulator/emulator@sha256:22a4d5b0f97bd0c2ee20da342493c5a60e40b4d62ec20c174cb32ff4ee1f65bf                               34 seconds ago       Running             cloud-spanner-emulator                   0                   147648c7aa617       cloud-spanner-emulator-5bdddb765-z68hl      default
	e46ae5d0bae44       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             37 seconds ago       Running             local-path-provisioner                   0                   56380a7dc8c65       local-path-provisioner-648f6765c9-pxnl9     local-path-storage
	0abee2adb5882       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                                              38 seconds ago       Running             yakd                                     0                   579a4dfb1f9b3       yakd-dashboard-5ff678cb9-m4h6g              yakd-dashboard
	b3c5366ec83c7       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           40 seconds ago       Running             registry                                 0                   61d76bc91493a       registry-6b586f9694-z62qp                   kube-system
	dfd9e15edab91       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               42 seconds ago       Running             minikube-ingress-dns                     0                   6e8291a8acd06       kube-ingress-dns-minikube                   kube-system
	a380c22257b5c       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        47 seconds ago       Running             metrics-server                           0                   a0a96b039d496       metrics-server-85b7d694d7-krz87             kube-system
	f6e58bb2900bb       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             48 seconds ago       Running             coredns                                  0                   83d2029d2973f       coredns-66bc5c9577-pqbbw                    kube-system
	383049ced70e6       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             48 seconds ago       Running             storage-provisioner                      0                   643729572f86b       storage-provisioner                         kube-system
	840301d1bb594       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                                             About a minute ago   Running             kindnet-cni                              0                   2361da5dc56fe       kindnet-h5jgb                               kube-system
	950dc7c477829       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                                                             About a minute ago   Running             kube-proxy                               0                   fb57c0e4200cc       kube-proxy-rgd8j                            kube-system
	c85efeb6af746       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                                                             About a minute ago   Running             kube-scheduler                           0                   d9c7b0fa863c0       kube-scheduler-addons-401977                kube-system
	f55d4645a3da6       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                                                             About a minute ago   Running             kube-controller-manager                  0                   bafe11db410e1       kube-controller-manager-addons-401977       kube-system
	a9fb6926bb935       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                                                             About a minute ago   Running             etcd                                     0                   67241c7cf5b2d       etcd-addons-401977                          kube-system
	a5dab92a052f8       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                                                             About a minute ago   Running             kube-apiserver                           0                   d0fb3406ff2c3       kube-apiserver-addons-401977                kube-system
	
	
	==> coredns [f6e58bb2900bb7013f6f81ccca2250cb1b6547be3edcc33d4bc867ae9d0b4072] <==
	[INFO] 10.244.0.17:39087 - 40053 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000169983s
	[INFO] 10.244.0.17:59263 - 3584 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000110151s
	[INFO] 10.244.0.17:59263 - 3289 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000113235s
	[INFO] 10.244.0.17:60318 - 7351 "AAAA IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,aa,rd,ra 204 0.000068625s
	[INFO] 10.244.0.17:60318 - 6937 "A IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,aa,rd,ra 204 0.000061843s
	[INFO] 10.244.0.17:47720 - 61486 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000049645s
	[INFO] 10.244.0.17:47720 - 61174 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000051677s
	[INFO] 10.244.0.17:45692 - 4644 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.00005919s
	[INFO] 10.244.0.17:45692 - 4913 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000095058s
	[INFO] 10.244.0.17:52775 - 53214 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00007659s
	[INFO] 10.244.0.17:52775 - 52758 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000130182s
	[INFO] 10.244.0.22:46959 - 5024 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000157891s
	[INFO] 10.244.0.22:56312 - 49113 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000219022s
	[INFO] 10.244.0.22:35330 - 31028 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000153967s
	[INFO] 10.244.0.22:38766 - 59509 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000171753s
	[INFO] 10.244.0.22:35896 - 17651 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000124083s
	[INFO] 10.244.0.22:60594 - 57330 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000212699s
	[INFO] 10.244.0.22:46058 - 51217 "AAAA IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 188 0.004393543s
	[INFO] 10.244.0.22:38674 - 18855 "A IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 188 0.00542322s
	[INFO] 10.244.0.22:57292 - 51412 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.005239056s
	[INFO] 10.244.0.22:44793 - 2544 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.006119828s
	[INFO] 10.244.0.22:50014 - 52201 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.003939884s
	[INFO] 10.244.0.22:34804 - 7486 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.005158845s
	[INFO] 10.244.0.22:54900 - 64521 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.00199762s
	[INFO] 10.244.0.22:43937 - 25315 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.002283867s
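
The NXDOMAIN/NOERROR pairs above are the cluster resolver's search-list expansion at work: with the cluster-default ndots:5, a short name such as storage.googleapis.com is first tried with every search domain from the pod's resolv.conf and only queried as-is last. The sketch below reproduces that expansion; the search list is inferred from the query names logged for 10.244.0.22 (a pod in the gcp-auth namespace on a GCE-hosted node), not read from an actual resolv.conf.

package main

import (
	"fmt"
	"strings"
)

// expand models resolver search-list behaviour: when a name has fewer dots
// than ndots, each search domain is appended and tried first, and the bare
// name is tried last.
func expand(name string, search []string, ndots int) []string {
	var out []string
	if strings.Count(name, ".") < ndots {
		for _, d := range search {
			out = append(out, name+"."+d)
		}
	}
	return append(out, name)
}

func main() {
	// Search list inferred from the queries issued by 10.244.0.22 above.
	search := []string{
		"gcp-auth.svc.cluster.local",
		"svc.cluster.local",
		"cluster.local",
		"us-central1-a.c.k8s-minikube.internal",
		"c.k8s-minikube.internal",
		"google.internal",
	}
	for _, q := range expand("storage.googleapis.com", search, 5) {
		fmt.Println(q) // every name but the last returns NXDOMAIN in the log
	}
}

The internal names like registry.kube-system.svc.cluster.local resolve on one of the search-domain attempts instead, which is why those queries end in NOERROR rather than falling through to the bare name.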
	
	
	==> describe nodes <==
	Name:               addons-401977
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-401977
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c7bb9b74fe8fa422b352c813eb039f077f405cb1
	                    minikube.k8s.io/name=addons-401977
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_17T00_05_20_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-401977
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-401977"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Dec 2025 00:05:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-401977
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Dec 2025 00:06:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Dec 2025 00:06:51 +0000   Wed, 17 Dec 2025 00:05:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Dec 2025 00:06:51 +0000   Wed, 17 Dec 2025 00:05:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Dec 2025 00:06:51 +0000   Wed, 17 Dec 2025 00:05:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Dec 2025 00:06:51 +0000   Wed, 17 Dec 2025 00:06:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-401977
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 f435001bb2f82675fe905cfc693dda54
	  System UUID:                02cdd826-ad7f-4fd9-ac65-a0cc01c6f3f3
	  Boot ID:                    0e9cedc6-c46e-4354-b3d2-9272a8b33ae5
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (27 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  default                     cloud-spanner-emulator-5bdddb765-z68hl       0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  gadget                      gadget-8kklk                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  gcp-auth                    gcp-auth-78565c9fb4-gbjdx                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         82s
	  ingress-nginx               ingress-nginx-controller-85d4c799dd-2ntv7    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         88s
	  kube-system                 amd-gpu-device-plugin-zhxtw                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  kube-system                 coredns-66bc5c9577-pqbbw                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     90s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 csi-hostpathplugin-bc4sr                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  kube-system                 etcd-addons-401977                           100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         97s
	  kube-system                 kindnet-h5jgb                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      90s
	  kube-system                 kube-apiserver-addons-401977                 250m (3%)     0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 kube-controller-manager-addons-401977        200m (2%)     0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 kube-proxy-rgd8j                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 kube-scheduler-addons-401977                 100m (1%)     0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 metrics-server-85b7d694d7-krz87              100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         89s
	  kube-system                 nvidia-device-plugin-daemonset-xk8ql         0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  kube-system                 registry-6b586f9694-z62qp                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 registry-creds-764b6fb674-5ddkb              0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 registry-proxy-58fhj                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  kube-system                 snapshot-controller-7d9fbc56b8-bs9sb         0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 snapshot-controller-7d9fbc56b8-tqvm8         0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  local-path-storage          local-path-provisioner-648f6765c9-pxnl9      0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-m4h6g               0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     89s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 88s   kube-proxy       
	  Normal  Starting                 96s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  95s   kubelet          Node addons-401977 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    95s   kubelet          Node addons-401977 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     95s   kubelet          Node addons-401977 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           91s   node-controller  Node addons-401977 event: Registered Node addons-401977 in Controller
	  Normal  NodeReady                49s   kubelet          Node addons-401977 status is now: NodeReady
	
	
	==> dmesg <==
	[Dec16 23:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001891] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.085011] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.373937] i8042: Warning: Keylock active
	[  +0.013287] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.483263] block sda: the capability attribute has been deprecated.
	[  +0.089382] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024236] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.864694] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [a9fb6926bb935bb35c23586c7a59d3ecc32fdac56e6508767e75f4b3b5db4340] <==
	{"level":"warn","ts":"2025-12-17T00:05:17.117815Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41474","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:05:17.125867Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41504","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:05:17.134302Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41516","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:05:17.142107Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41534","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:05:17.156065Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41558","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:05:17.160769Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:05:17.167426Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41598","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:05:17.175029Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41616","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:05:17.182425Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41638","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:05:17.188716Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41644","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:05:17.195786Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41648","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:05:17.203872Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41664","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:05:17.210939Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41676","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:05:17.218227Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41688","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:05:17.225251Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41704","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:05:17.241134Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41724","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:05:17.247717Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41742","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:05:17.254720Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41764","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:05:17.306961Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41786","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:05:27.845975Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49906","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:05:27.852167Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49908","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:05:54.708690Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54380","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:05:54.715226Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:05:54.730530Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54410","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:05:54.736778Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54434","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [208d6abcaa9c4d4908630c30283b4b76225353c0a6f9a3858458a258bc072371] <==
	2025/12/17 00:06:40 GCP Auth Webhook started!
	2025/12/17 00:06:46 Ready to marshal response ...
	2025/12/17 00:06:46 Ready to write response ...
	2025/12/17 00:06:46 Ready to marshal response ...
	2025/12/17 00:06:46 Ready to write response ...
	2025/12/17 00:06:46 Ready to marshal response ...
	2025/12/17 00:06:46 Ready to write response ...
	
	
	==> kernel <==
	 00:06:55 up 49 min,  0 user,  load average: 2.52, 1.13, 0.43
	Linux addons-401977 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [840301d1bb594051e430e85719c2707ed97013c7e3269f84012213ab768d9935] <==
	I1217 00:05:26.092860       1 main.go:148] setting mtu 1500 for CNI 
	I1217 00:05:26.092879       1 main.go:178] kindnetd IP family: "ipv4"
	I1217 00:05:26.092903       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-17T00:05:26Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1217 00:05:26.399513       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1217 00:05:26.399814       1 controller.go:381] "Waiting for informer caches to sync"
	I1217 00:05:26.399941       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1217 00:05:26.400177       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1217 00:05:56.400246       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1217 00:05:56.400246       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1217 00:05:56.404566       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1217 00:05:56.404657       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1217 00:05:58.001305       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1217 00:05:58.001328       1 metrics.go:72] Registering metrics
	I1217 00:05:58.001394       1 controller.go:711] "Syncing nftables rules"
	I1217 00:06:06.404194       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 00:06:06.404249       1 main.go:301] handling current node
	I1217 00:06:16.400219       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 00:06:16.400258       1 main.go:301] handling current node
	I1217 00:06:26.399798       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 00:06:26.399832       1 main.go:301] handling current node
	I1217 00:06:36.400284       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 00:06:36.400328       1 main.go:301] handling current node
	I1217 00:06:46.400247       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 00:06:46.400274       1 main.go:301] handling current node
	
	
	==> kube-apiserver [a5dab92a052f84df44c207e5dd5c238be41faadb22b92368fa8135c2af2fd265] <==
	W1217 00:06:11.089505       1 handler_proxy.go:99] no RequestInfo found in the context
	E1217 00:06:11.089587       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1217 00:06:11.089698       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.106.69.233:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.106.69.233:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.106.69.233:443: connect: connection refused" logger="UnhandledError"
	E1217 00:06:11.091561       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.106.69.233:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.106.69.233:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.106.69.233:443: connect: connection refused" logger="UnhandledError"
	W1217 00:06:12.090510       1 handler_proxy.go:99] no RequestInfo found in the context
	E1217 00:06:12.090545       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1217 00:06:12.090558       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1217 00:06:12.090593       1 handler_proxy.go:99] no RequestInfo found in the context
	E1217 00:06:12.090657       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1217 00:06:12.091767       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1217 00:06:16.103225       1 handler_proxy.go:99] no RequestInfo found in the context
	E1217 00:06:16.103276       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1217 00:06:16.103332       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.106.69.233:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.106.69.233:443/apis/metrics.k8s.io/v1beta1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" logger="UnhandledError"
	I1217 00:06:16.117012       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1217 00:06:53.864145       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:39138: use of closed network connection
	E1217 00:06:54.003628       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:39154: use of closed network connection
	
	
	==> kube-controller-manager [f55d4645a3da61635311b8471dafe61926de73f6c6575bcdc112a086cfde666a] <==
	I1217 00:05:24.694965       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1217 00:05:24.694979       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1217 00:05:24.695051       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1217 00:05:24.695056       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1217 00:05:24.695055       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1217 00:05:24.695577       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1217 00:05:24.696902       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1217 00:05:24.696930       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1217 00:05:24.699155       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1217 00:05:24.702105       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1217 00:05:24.702163       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1217 00:05:24.702192       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1217 00:05:24.702199       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1217 00:05:24.702204       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1217 00:05:24.703235       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1217 00:05:24.707850       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="addons-401977" podCIDRs=["10.244.0.0/24"]
	I1217 00:05:24.713832       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1217 00:05:54.703218       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1217 00:05:54.703387       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1217 00:05:54.703444       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1217 00:05:54.720901       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1217 00:05:54.724404       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1217 00:05:54.803905       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1217 00:05:54.825100       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1217 00:06:09.655523       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [950dc7c477829d5fc62b7e10ed2edf92016de18feb8bc6c8d8262fbf28097b78] <==
	I1217 00:05:25.987933       1 server_linux.go:53] "Using iptables proxy"
	I1217 00:05:26.237322       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1217 00:05:26.340080       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1217 00:05:26.341801       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1217 00:05:26.341951       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1217 00:05:26.620331       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1217 00:05:26.620449       1 server_linux.go:132] "Using iptables Proxier"
	I1217 00:05:26.741131       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1217 00:05:26.751538       1 server.go:527] "Version info" version="v1.34.2"
	I1217 00:05:26.751683       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 00:05:26.759899       1 config.go:200] "Starting service config controller"
	I1217 00:05:26.760020       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1217 00:05:26.760380       1 config.go:106] "Starting endpoint slice config controller"
	I1217 00:05:26.760462       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1217 00:05:26.760890       1 config.go:403] "Starting serviceCIDR config controller"
	I1217 00:05:26.761185       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1217 00:05:26.761771       1 config.go:309] "Starting node config controller"
	I1217 00:05:26.762040       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1217 00:05:26.762086       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1217 00:05:26.861220       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1217 00:05:26.861297       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1217 00:05:26.862380       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [c85efeb6af746eaf16f9b1ef2458c5065693555ece6e3b595a07ccc7b8c2e6d9] <==
	I1217 00:05:18.235429       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 00:05:18.237477       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1217 00:05:18.237510       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1217 00:05:18.237712       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1217 00:05:18.237739       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1217 00:05:18.239147       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1217 00:05:18.240209       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1217 00:05:18.242207       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1217 00:05:18.242272       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1217 00:05:18.242314       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1217 00:05:18.242571       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1217 00:05:18.242634       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1217 00:05:18.242607       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1217 00:05:18.242661       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1217 00:05:18.242665       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1217 00:05:18.242749       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1217 00:05:18.242756       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1217 00:05:18.242835       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1217 00:05:18.242842       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1217 00:05:18.242856       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1217 00:05:18.242893       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1217 00:05:18.242904       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1217 00:05:18.242923       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1217 00:05:18.242946       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1217 00:05:19.438460       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 17 00:06:27 addons-401977 kubelet[1295]: I1217 00:06:27.150697    1295 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/amd-gpu-device-plugin-zhxtw" podStartSLOduration=2.105761782 podStartE2EDuration="21.1506768s" podCreationTimestamp="2025-12-17 00:06:06 +0000 UTC" firstStartedPulling="2025-12-17 00:06:07.291101309 +0000 UTC m=+47.438035055" lastFinishedPulling="2025-12-17 00:06:26.336016317 +0000 UTC m=+66.482950073" observedRunningTime="2025-12-17 00:06:27.150072542 +0000 UTC m=+67.297006306" watchObservedRunningTime="2025-12-17 00:06:27.1506768 +0000 UTC m=+67.297610562"
	Dec 17 00:06:27 addons-401977 kubelet[1295]: I1217 00:06:27.158826    1295 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/snapshot-controller-7d9fbc56b8-bs9sb" podStartSLOduration=40.932643999 podStartE2EDuration="1m0.158804271s" podCreationTimestamp="2025-12-17 00:05:27 +0000 UTC" firstStartedPulling="2025-12-17 00:06:07.291152244 +0000 UTC m=+47.438085986" lastFinishedPulling="2025-12-17 00:06:26.517312493 +0000 UTC m=+66.664246258" observedRunningTime="2025-12-17 00:06:27.158291106 +0000 UTC m=+67.305224869" watchObservedRunningTime="2025-12-17 00:06:27.158804271 +0000 UTC m=+67.305738032"
	Dec 17 00:06:27 addons-401977 kubelet[1295]: I1217 00:06:27.343746    1295 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hl9l6\" (UniqueName: \"kubernetes.io/projected/21cd60af-c71d-4ec6-9c0c-fa3bef2c0e95-kube-api-access-hl9l6\") pod \"21cd60af-c71d-4ec6-9c0c-fa3bef2c0e95\" (UID: \"21cd60af-c71d-4ec6-9c0c-fa3bef2c0e95\") "
	Dec 17 00:06:27 addons-401977 kubelet[1295]: I1217 00:06:27.346307    1295 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21cd60af-c71d-4ec6-9c0c-fa3bef2c0e95-kube-api-access-hl9l6" (OuterVolumeSpecName: "kube-api-access-hl9l6") pod "21cd60af-c71d-4ec6-9c0c-fa3bef2c0e95" (UID: "21cd60af-c71d-4ec6-9c0c-fa3bef2c0e95"). InnerVolumeSpecName "kube-api-access-hl9l6". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Dec 17 00:06:27 addons-401977 kubelet[1295]: I1217 00:06:27.445458    1295 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hl9l6\" (UniqueName: \"kubernetes.io/projected/21cd60af-c71d-4ec6-9c0c-fa3bef2c0e95-kube-api-access-hl9l6\") on node \"addons-401977\" DevicePath \"\""
	Dec 17 00:06:28 addons-401977 kubelet[1295]: I1217 00:06:28.145608    1295 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="313ae24dd8dd0a9cc9a52d2c776c2aea6d8a7eea66a7c4a55ac34e50f42b6cad"
	Dec 17 00:06:28 addons-401977 kubelet[1295]: I1217 00:06:28.145937    1295 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-zhxtw" secret="" err="secret \"gcp-auth\" not found"
	Dec 17 00:06:29 addons-401977 kubelet[1295]: I1217 00:06:29.150502    1295 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-xk8ql" secret="" err="secret \"gcp-auth\" not found"
	Dec 17 00:06:29 addons-401977 kubelet[1295]: I1217 00:06:29.158750    1295 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/nvidia-device-plugin-daemonset-xk8ql" podStartSLOduration=1.4743681739999999 podStartE2EDuration="23.158731705s" podCreationTimestamp="2025-12-17 00:06:06 +0000 UTC" firstStartedPulling="2025-12-17 00:06:07.296106084 +0000 UTC m=+47.443039826" lastFinishedPulling="2025-12-17 00:06:28.980469611 +0000 UTC m=+69.127403357" observedRunningTime="2025-12-17 00:06:29.158185112 +0000 UTC m=+69.305118875" watchObservedRunningTime="2025-12-17 00:06:29.158731705 +0000 UTC m=+69.305665467"
	Dec 17 00:06:30 addons-401977 kubelet[1295]: I1217 00:06:30.156135    1295 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-xk8ql" secret="" err="secret \"gcp-auth\" not found"
	Dec 17 00:06:32 addons-401977 kubelet[1295]: I1217 00:06:32.165823    1295 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-58fhj" secret="" err="secret \"gcp-auth\" not found"
	Dec 17 00:06:32 addons-401977 kubelet[1295]: I1217 00:06:32.182203    1295 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/registry-proxy-58fhj" podStartSLOduration=2.244367785 podStartE2EDuration="26.18218503s" podCreationTimestamp="2025-12-17 00:06:06 +0000 UTC" firstStartedPulling="2025-12-17 00:06:07.375160904 +0000 UTC m=+47.522094658" lastFinishedPulling="2025-12-17 00:06:31.312978158 +0000 UTC m=+71.459911903" observedRunningTime="2025-12-17 00:06:32.182010774 +0000 UTC m=+72.328944535" watchObservedRunningTime="2025-12-17 00:06:32.18218503 +0000 UTC m=+72.329118793"
	Dec 17 00:06:33 addons-401977 kubelet[1295]: I1217 00:06:33.172744    1295 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-58fhj" secret="" err="secret \"gcp-auth\" not found"
	Dec 17 00:06:38 addons-401977 kubelet[1295]: I1217 00:06:38.207691    1295 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="ingress-nginx/ingress-nginx-controller-85d4c799dd-2ntv7" podStartSLOduration=58.96641179 podStartE2EDuration="1m11.20767189s" podCreationTimestamp="2025-12-17 00:05:27 +0000 UTC" firstStartedPulling="2025-12-17 00:06:22.821692054 +0000 UTC m=+62.968625799" lastFinishedPulling="2025-12-17 00:06:35.062952153 +0000 UTC m=+75.209885899" observedRunningTime="2025-12-17 00:06:35.193827304 +0000 UTC m=+75.340761088" watchObservedRunningTime="2025-12-17 00:06:38.20767189 +0000 UTC m=+78.354605652"
	Dec 17 00:06:38 addons-401977 kubelet[1295]: I1217 00:06:38.208142    1295 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gadget/gadget-8kklk" podStartSLOduration=65.359860802 podStartE2EDuration="1m12.208132706s" podCreationTimestamp="2025-12-17 00:05:26 +0000 UTC" firstStartedPulling="2025-12-17 00:06:31.291247282 +0000 UTC m=+71.438181026" lastFinishedPulling="2025-12-17 00:06:38.139519182 +0000 UTC m=+78.286452930" observedRunningTime="2025-12-17 00:06:38.207017581 +0000 UTC m=+78.353951338" watchObservedRunningTime="2025-12-17 00:06:38.208132706 +0000 UTC m=+78.355066469"
	Dec 17 00:06:38 addons-401977 kubelet[1295]: E1217 00:06:38.735624    1295 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Dec 17 00:06:38 addons-401977 kubelet[1295]: E1217 00:06:38.735734    1295 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/99c64f75-ab3b-49e5-b5d9-f425e95c71c1-gcr-creds podName:99c64f75-ab3b-49e5-b5d9-f425e95c71c1 nodeName:}" failed. No retries permitted until 2025-12-17 00:07:10.73571267 +0000 UTC m=+110.882646434 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/99c64f75-ab3b-49e5-b5d9-f425e95c71c1-gcr-creds") pod "registry-creds-764b6fb674-5ddkb" (UID: "99c64f75-ab3b-49e5-b5d9-f425e95c71c1") : secret "registry-creds-gcr" not found
	Dec 17 00:06:39 addons-401977 kubelet[1295]: I1217 00:06:39.989408    1295 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: hostpath.csi.k8s.io endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
	Dec 17 00:06:39 addons-401977 kubelet[1295]: I1217 00:06:39.989453    1295 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: hostpath.csi.k8s.io at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
	Dec 17 00:06:41 addons-401977 kubelet[1295]: I1217 00:06:41.337137    1295 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gcp-auth/gcp-auth-78565c9fb4-gbjdx" podStartSLOduration=66.800791423 podStartE2EDuration="1m8.337114578s" podCreationTimestamp="2025-12-17 00:05:33 +0000 UTC" firstStartedPulling="2025-12-17 00:06:39.026071135 +0000 UTC m=+79.173004880" lastFinishedPulling="2025-12-17 00:06:40.562394289 +0000 UTC m=+80.709328035" observedRunningTime="2025-12-17 00:06:41.226853757 +0000 UTC m=+81.373787520" watchObservedRunningTime="2025-12-17 00:06:41.337114578 +0000 UTC m=+81.484048341"
	Dec 17 00:06:44 addons-401977 kubelet[1295]: I1217 00:06:44.246588    1295 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/csi-hostpathplugin-bc4sr" podStartSLOduration=2.073487779 podStartE2EDuration="38.246570926s" podCreationTimestamp="2025-12-17 00:06:06 +0000 UTC" firstStartedPulling="2025-12-17 00:06:07.297214538 +0000 UTC m=+47.444148280" lastFinishedPulling="2025-12-17 00:06:43.470297672 +0000 UTC m=+83.617231427" observedRunningTime="2025-12-17 00:06:44.245170028 +0000 UTC m=+84.392103814" watchObservedRunningTime="2025-12-17 00:06:44.246570926 +0000 UTC m=+84.393504689"
	Dec 17 00:06:46 addons-401977 kubelet[1295]: I1217 00:06:46.797783    1295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/0d3a07bd-259f-4827-8d61-b6a0453c30dc-gcp-creds\") pod \"busybox\" (UID: \"0d3a07bd-259f-4827-8d61-b6a0453c30dc\") " pod="default/busybox"
	Dec 17 00:06:46 addons-401977 kubelet[1295]: I1217 00:06:46.797827    1295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h76dw\" (UniqueName: \"kubernetes.io/projected/0d3a07bd-259f-4827-8d61-b6a0453c30dc-kube-api-access-h76dw\") pod \"busybox\" (UID: \"0d3a07bd-259f-4827-8d61-b6a0453c30dc\") " pod="default/busybox"
	Dec 17 00:06:48 addons-401977 kubelet[1295]: I1217 00:06:48.262858    1295 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.679200282 podStartE2EDuration="2.262838992s" podCreationTimestamp="2025-12-17 00:06:46 +0000 UTC" firstStartedPulling="2025-12-17 00:06:47.049366413 +0000 UTC m=+87.196300155" lastFinishedPulling="2025-12-17 00:06:47.633005116 +0000 UTC m=+87.779938865" observedRunningTime="2025-12-17 00:06:48.262079606 +0000 UTC m=+88.409013359" watchObservedRunningTime="2025-12-17 00:06:48.262838992 +0000 UTC m=+88.409772757"
	Dec 17 00:06:53 addons-401977 kubelet[1295]: E1217 00:06:53.864052    1295 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:36464->127.0.0.1:46671: write tcp 127.0.0.1:36464->127.0.0.1:46671: write: broken pipe
	
	
	==> storage-provisioner [383049ced70e6adc7b2ef0d1a415cd527a2670898bfb2873c9b8955afffff3eb] <==
	W1217 00:06:31.360416       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:06:33.363786       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:06:33.367473       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:06:35.371213       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:06:35.375358       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:06:37.379040       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:06:37.384645       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:06:39.387613       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:06:39.392377       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:06:41.396229       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:06:41.401014       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:06:43.404365       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:06:43.461707       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:06:45.464057       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:06:45.467461       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:06:47.469989       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:06:47.473131       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:06:49.475738       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:06:49.479525       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:06:51.482178       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:06:51.486161       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:06:53.488794       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:06:53.493178       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:06:55.496125       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:06:55.500017       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-401977 -n addons-401977
helpers_test.go:270: (dbg) Run:  kubectl --context addons-401977 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: ingress-nginx-admission-create-9xxch ingress-nginx-admission-patch-md92j registry-creds-764b6fb674-5ddkb
helpers_test.go:283: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context addons-401977 describe pod ingress-nginx-admission-create-9xxch ingress-nginx-admission-patch-md92j registry-creds-764b6fb674-5ddkb
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context addons-401977 describe pod ingress-nginx-admission-create-9xxch ingress-nginx-admission-patch-md92j registry-creds-764b6fb674-5ddkb: exit status 1 (60.803592ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-9xxch" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-md92j" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-5ddkb" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context addons-401977 describe pod ingress-nginx-admission-create-9xxch ingress-nginx-admission-patch-md92j registry-creds-764b6fb674-5ddkb: exit status 1
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-401977 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-401977 addons disable headlamp --alsologtostderr -v=1: exit status 11 (229.025073ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 00:06:56.499570   27079 out.go:360] Setting OutFile to fd 1 ...
	I1217 00:06:56.499719   27079 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:06:56.499729   27079 out.go:374] Setting ErrFile to fd 2...
	I1217 00:06:56.499734   27079 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:06:56.499919   27079 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-12816/.minikube/bin
	I1217 00:06:56.500169   27079 mustload.go:66] Loading cluster: addons-401977
	I1217 00:06:56.500442   27079 config.go:182] Loaded profile config "addons-401977": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:06:56.500457   27079 addons.go:622] checking whether the cluster is paused
	I1217 00:06:56.500530   27079 config.go:182] Loaded profile config "addons-401977": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:06:56.500541   27079 host.go:66] Checking if "addons-401977" exists ...
	I1217 00:06:56.500900   27079 cli_runner.go:164] Run: docker container inspect addons-401977 --format={{.State.Status}}
	I1217 00:06:56.518032   27079 ssh_runner.go:195] Run: systemctl --version
	I1217 00:06:56.518079   27079 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-401977
	I1217 00:06:56.534448   27079 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/addons-401977/id_rsa Username:docker}
	I1217 00:06:56.624159   27079 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 00:06:56.624229   27079 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 00:06:56.651322   27079 cri.go:89] found id: "ad54e02660b07cbde6d493c7d5e3ed172475b94a9eaee4e87d4bd9ef151c0b22"
	I1217 00:06:56.651357   27079 cri.go:89] found id: "45b944b097394c01a6a9f73b7481a21c99e516a6329609ae554a14bb17a1b0c4"
	I1217 00:06:56.651362   27079 cri.go:89] found id: "96a32ce198ee5fea679ca8aa4c1ec23792c97d3131fb348ba7d61057703f8b98"
	I1217 00:06:56.651365   27079 cri.go:89] found id: "6060f042efdda63b0e493f156e033554f81a8024186ecd9b75e89903f49cc5a6"
	I1217 00:06:56.651368   27079 cri.go:89] found id: "cfb270351b7717448c0caad78a981c83e200c3645e7ea23795af66b940e7f694"
	I1217 00:06:56.651372   27079 cri.go:89] found id: "60a76e334b179e66cc6937fdcf120c474c69221436fe9732a138ce177b409c81"
	I1217 00:06:56.651377   27079 cri.go:89] found id: "404a83db71038ead5c1b120d09189bcde64f22a00bc12c772da57ed50d0b4e31"
	I1217 00:06:56.651380   27079 cri.go:89] found id: "7477be1e8e83d8e93db214a41f2cbe2dac4702f15bafed422689a1ad41a282ee"
	I1217 00:06:56.651383   27079 cri.go:89] found id: "88b4569360ba98b30f65b36287d6c38a51676cebffa1785dc5414861aa1a0629"
	I1217 00:06:56.651401   27079 cri.go:89] found id: "7ad73ae76171d2d2105f4ccfe0862424948abc4be7d39a9f3d3999660c222211"
	I1217 00:06:56.651406   27079 cri.go:89] found id: "e77e53ca2e567bf130231e95ef7e993ca42c0bc61aab6ffced345e9e69c005cc"
	I1217 00:06:56.651411   27079 cri.go:89] found id: "d5d932c1082d35b313501328162c7f2f663374a7e3c58c4c2b1114359e9493df"
	I1217 00:06:56.651416   27079 cri.go:89] found id: "5d7bc94a6e7622d199da43ca8e9942b4e849ae76949d0554b9d548a510dd26ce"
	I1217 00:06:56.651423   27079 cri.go:89] found id: "b3c5366ec83c76413d2675b009b0c59e92e94121bf53dd83abb96b2fc0bd58b7"
	I1217 00:06:56.651428   27079 cri.go:89] found id: "dfd9e15edab91358da5ee7de7e20baaf2f8b820f9507af61b26dcbf0be9749ac"
	I1217 00:06:56.651448   27079 cri.go:89] found id: "a380c22257b5cfb547f66e134e244b2c1d6bd55bad431f846b76089ef28f6a89"
	I1217 00:06:56.651458   27079 cri.go:89] found id: "f6e58bb2900bb7013f6f81ccca2250cb1b6547be3edcc33d4bc867ae9d0b4072"
	I1217 00:06:56.651463   27079 cri.go:89] found id: "383049ced70e6adc7b2ef0d1a415cd527a2670898bfb2873c9b8955afffff3eb"
	I1217 00:06:56.651465   27079 cri.go:89] found id: "840301d1bb594051e430e85719c2707ed97013c7e3269f84012213ab768d9935"
	I1217 00:06:56.651468   27079 cri.go:89] found id: "950dc7c477829d5fc62b7e10ed2edf92016de18feb8bc6c8d8262fbf28097b78"
	I1217 00:06:56.651474   27079 cri.go:89] found id: "c85efeb6af746eaf16f9b1ef2458c5065693555ece6e3b595a07ccc7b8c2e6d9"
	I1217 00:06:56.651476   27079 cri.go:89] found id: "f55d4645a3da61635311b8471dafe61926de73f6c6575bcdc112a086cfde666a"
	I1217 00:06:56.651479   27079 cri.go:89] found id: "a9fb6926bb935bb35c23586c7a59d3ecc32fdac56e6508767e75f4b3b5db4340"
	I1217 00:06:56.651482   27079 cri.go:89] found id: "a5dab92a052f84df44c207e5dd5c238be41faadb22b92368fa8135c2af2fd265"
	I1217 00:06:56.651484   27079 cri.go:89] found id: ""
	I1217 00:06:56.651540   27079 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 00:06:56.665802   27079 out.go:203] 
	W1217 00:06:56.667230   27079 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T00:06:56Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T00:06:56Z" level=error msg="open /run/runc: no such file or directory"
	
	W1217 00:06:56.667257   27079 out.go:285] * 
	* 
	W1217 00:06:56.670347   27079 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 00:06:56.671691   27079 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable headlamp addon: args "out/minikube-linux-amd64 -p addons-401977 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (2.43s)
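Note on the failure mode seen above and in the parallel addon tests that follow: each "addons disable" invocation exits with status 11 because minikube's paused-state check shells out to `sudo runc list -f json`, which fails on this crio runner with "open /run/runc: no such file or directory". The snippet below is a hypothetical, minimal Go sketch, not minikube's actual code or an endorsed fix: it reproduces the same check but treats a missing runc state root as "no paused containers" instead of a hard error. The listPaused helper and the decision to stat /run/runc are assumptions for illustration; the id/status fields mirror the JSON that `runc list -f json` actually emits.

	// Hypothetical sketch of the paused-container check that fails above.
	package main

	import (
		"encoding/json"
		"fmt"
		"os"
		"os/exec"
	)

	// runcContainer mirrors the fields of `runc list -f json` output used here.
	type runcContainer struct {
		ID     string `json:"id"`
		Status string `json:"status"`
	}

	// listPaused returns the IDs of paused containers known to runc.
	func listPaused() ([]string, error) {
		// Assumption: if the runc state root is absent (as on this crio runner),
		// nothing can be paused, so return an empty result rather than an error.
		if _, err := os.Stat("/run/runc"); os.IsNotExist(err) {
			return nil, nil
		}
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
		if err != nil {
			return nil, fmt.Errorf("runc list: %w", err)
		}
		var containers []runcContainer
		if err := json.Unmarshal(out, &containers); err != nil {
			return nil, fmt.Errorf("parse runc list output: %w", err)
		}
		var paused []string
		for _, c := range containers {
			if c.Status == "paused" {
				paused = append(paused, c.ID)
			}
		}
		return paused, nil
	}

	func main() {
		paused, err := listPaused()
		if err != nil {
			fmt.Fprintln(os.Stderr, "paused check failed:", err)
			os.Exit(1)
		}
		fmt.Printf("paused containers: %d\n", len(paused))
	}

Under that assumption the sketch would report zero paused containers on this runner, where the real check instead aborts the disable with MK_ADDON_DISABLE_PAUSED.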

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (5.24s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:353: "cloud-spanner-emulator-5bdddb765-z68hl" [0d8c2426-768d-4145-b0a1-0d11abf8cf02] Running
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003185689s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-401977 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-401977 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (228.977089ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 00:07:20.703593   29474 out.go:360] Setting OutFile to fd 1 ...
	I1217 00:07:20.703710   29474 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:07:20.703718   29474 out.go:374] Setting ErrFile to fd 2...
	I1217 00:07:20.703722   29474 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:07:20.703940   29474 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-12816/.minikube/bin
	I1217 00:07:20.704208   29474 mustload.go:66] Loading cluster: addons-401977
	I1217 00:07:20.704512   29474 config.go:182] Loaded profile config "addons-401977": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:07:20.704530   29474 addons.go:622] checking whether the cluster is paused
	I1217 00:07:20.704607   29474 config.go:182] Loaded profile config "addons-401977": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:07:20.704619   29474 host.go:66] Checking if "addons-401977" exists ...
	I1217 00:07:20.704964   29474 cli_runner.go:164] Run: docker container inspect addons-401977 --format={{.State.Status}}
	I1217 00:07:20.722437   29474 ssh_runner.go:195] Run: systemctl --version
	I1217 00:07:20.722488   29474 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-401977
	I1217 00:07:20.739475   29474 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/addons-401977/id_rsa Username:docker}
	I1217 00:07:20.829954   29474 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 00:07:20.830054   29474 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 00:07:20.857888   29474 cri.go:89] found id: "7d625ea0827567c1ecf6072d9f705ee4a0e2896e259825605202bb193195e013"
	I1217 00:07:20.857924   29474 cri.go:89] found id: "ad54e02660b07cbde6d493c7d5e3ed172475b94a9eaee4e87d4bd9ef151c0b22"
	I1217 00:07:20.857930   29474 cri.go:89] found id: "45b944b097394c01a6a9f73b7481a21c99e516a6329609ae554a14bb17a1b0c4"
	I1217 00:07:20.857933   29474 cri.go:89] found id: "96a32ce198ee5fea679ca8aa4c1ec23792c97d3131fb348ba7d61057703f8b98"
	I1217 00:07:20.857936   29474 cri.go:89] found id: "6060f042efdda63b0e493f156e033554f81a8024186ecd9b75e89903f49cc5a6"
	I1217 00:07:20.857940   29474 cri.go:89] found id: "cfb270351b7717448c0caad78a981c83e200c3645e7ea23795af66b940e7f694"
	I1217 00:07:20.857943   29474 cri.go:89] found id: "60a76e334b179e66cc6937fdcf120c474c69221436fe9732a138ce177b409c81"
	I1217 00:07:20.857946   29474 cri.go:89] found id: "404a83db71038ead5c1b120d09189bcde64f22a00bc12c772da57ed50d0b4e31"
	I1217 00:07:20.857948   29474 cri.go:89] found id: "7477be1e8e83d8e93db214a41f2cbe2dac4702f15bafed422689a1ad41a282ee"
	I1217 00:07:20.857958   29474 cri.go:89] found id: "88b4569360ba98b30f65b36287d6c38a51676cebffa1785dc5414861aa1a0629"
	I1217 00:07:20.857961   29474 cri.go:89] found id: "7ad73ae76171d2d2105f4ccfe0862424948abc4be7d39a9f3d3999660c222211"
	I1217 00:07:20.857964   29474 cri.go:89] found id: "e77e53ca2e567bf130231e95ef7e993ca42c0bc61aab6ffced345e9e69c005cc"
	I1217 00:07:20.857966   29474 cri.go:89] found id: "d5d932c1082d35b313501328162c7f2f663374a7e3c58c4c2b1114359e9493df"
	I1217 00:07:20.857969   29474 cri.go:89] found id: "5d7bc94a6e7622d199da43ca8e9942b4e849ae76949d0554b9d548a510dd26ce"
	I1217 00:07:20.857972   29474 cri.go:89] found id: "b3c5366ec83c76413d2675b009b0c59e92e94121bf53dd83abb96b2fc0bd58b7"
	I1217 00:07:20.857983   29474 cri.go:89] found id: "dfd9e15edab91358da5ee7de7e20baaf2f8b820f9507af61b26dcbf0be9749ac"
	I1217 00:07:20.858000   29474 cri.go:89] found id: "a380c22257b5cfb547f66e134e244b2c1d6bd55bad431f846b76089ef28f6a89"
	I1217 00:07:20.858005   29474 cri.go:89] found id: "f6e58bb2900bb7013f6f81ccca2250cb1b6547be3edcc33d4bc867ae9d0b4072"
	I1217 00:07:20.858008   29474 cri.go:89] found id: "383049ced70e6adc7b2ef0d1a415cd527a2670898bfb2873c9b8955afffff3eb"
	I1217 00:07:20.858011   29474 cri.go:89] found id: "840301d1bb594051e430e85719c2707ed97013c7e3269f84012213ab768d9935"
	I1217 00:07:20.858017   29474 cri.go:89] found id: "950dc7c477829d5fc62b7e10ed2edf92016de18feb8bc6c8d8262fbf28097b78"
	I1217 00:07:20.858019   29474 cri.go:89] found id: "c85efeb6af746eaf16f9b1ef2458c5065693555ece6e3b595a07ccc7b8c2e6d9"
	I1217 00:07:20.858022   29474 cri.go:89] found id: "f55d4645a3da61635311b8471dafe61926de73f6c6575bcdc112a086cfde666a"
	I1217 00:07:20.858025   29474 cri.go:89] found id: "a9fb6926bb935bb35c23586c7a59d3ecc32fdac56e6508767e75f4b3b5db4340"
	I1217 00:07:20.858027   29474 cri.go:89] found id: "a5dab92a052f84df44c207e5dd5c238be41faadb22b92368fa8135c2af2fd265"
	I1217 00:07:20.858030   29474 cri.go:89] found id: ""
	I1217 00:07:20.858076   29474 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 00:07:20.871573   29474 out.go:203] 
	W1217 00:07:20.872527   29474 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T00:07:20Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T00:07:20Z" level=error msg="open /run/runc: no such file or directory"
	
	W1217 00:07:20.872543   29474 out.go:285] * 
	* 
	W1217 00:07:20.875399   29474 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 00:07:20.876516   29474 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable cloud-spanner addon: args "out/minikube-linux-amd64 -p addons-401977 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (5.24s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (10.04s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:951: (dbg) Run:  kubectl --context addons-401977 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:957: (dbg) Run:  kubectl --context addons-401977 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:961: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-401977 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-401977 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-401977 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-401977 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-401977 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-401977 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:353: "test-local-path" [0b7f3f0b-8c4c-4de9-a20f-abb53b1520ae] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "test-local-path" [0b7f3f0b-8c4c-4de9-a20f-abb53b1520ae] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "test-local-path" [0b7f3f0b-8c4c-4de9-a20f-abb53b1520ae] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.003812048s
addons_test.go:969: (dbg) Run:  kubectl --context addons-401977 get pvc test-pvc -o=json
addons_test.go:978: (dbg) Run:  out/minikube-linux-amd64 -p addons-401977 ssh "cat /opt/local-path-provisioner/pvc-9efeedf4-dfa0-403b-ae36-a3e2e8cb966e_default_test-pvc/file1"
addons_test.go:990: (dbg) Run:  kubectl --context addons-401977 delete pod test-local-path
addons_test.go:994: (dbg) Run:  kubectl --context addons-401977 delete pvc test-pvc
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-401977 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-401977 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (225.756309ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 00:07:22.866351   29722 out.go:360] Setting OutFile to fd 1 ...
	I1217 00:07:22.866489   29722 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:07:22.866498   29722 out.go:374] Setting ErrFile to fd 2...
	I1217 00:07:22.866502   29722 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:07:22.866674   29722 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-12816/.minikube/bin
	I1217 00:07:22.866911   29722 mustload.go:66] Loading cluster: addons-401977
	I1217 00:07:22.867228   29722 config.go:182] Loaded profile config "addons-401977": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:07:22.867246   29722 addons.go:622] checking whether the cluster is paused
	I1217 00:07:22.867325   29722 config.go:182] Loaded profile config "addons-401977": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:07:22.867337   29722 host.go:66] Checking if "addons-401977" exists ...
	I1217 00:07:22.867664   29722 cli_runner.go:164] Run: docker container inspect addons-401977 --format={{.State.Status}}
	I1217 00:07:22.885309   29722 ssh_runner.go:195] Run: systemctl --version
	I1217 00:07:22.885373   29722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-401977
	I1217 00:07:22.901228   29722 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/addons-401977/id_rsa Username:docker}
	I1217 00:07:22.991317   29722 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 00:07:22.991383   29722 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 00:07:23.019052   29722 cri.go:89] found id: "7d625ea0827567c1ecf6072d9f705ee4a0e2896e259825605202bb193195e013"
	I1217 00:07:23.019075   29722 cri.go:89] found id: "ad54e02660b07cbde6d493c7d5e3ed172475b94a9eaee4e87d4bd9ef151c0b22"
	I1217 00:07:23.019081   29722 cri.go:89] found id: "45b944b097394c01a6a9f73b7481a21c99e516a6329609ae554a14bb17a1b0c4"
	I1217 00:07:23.019086   29722 cri.go:89] found id: "96a32ce198ee5fea679ca8aa4c1ec23792c97d3131fb348ba7d61057703f8b98"
	I1217 00:07:23.019090   29722 cri.go:89] found id: "6060f042efdda63b0e493f156e033554f81a8024186ecd9b75e89903f49cc5a6"
	I1217 00:07:23.019093   29722 cri.go:89] found id: "cfb270351b7717448c0caad78a981c83e200c3645e7ea23795af66b940e7f694"
	I1217 00:07:23.019097   29722 cri.go:89] found id: "60a76e334b179e66cc6937fdcf120c474c69221436fe9732a138ce177b409c81"
	I1217 00:07:23.019101   29722 cri.go:89] found id: "404a83db71038ead5c1b120d09189bcde64f22a00bc12c772da57ed50d0b4e31"
	I1217 00:07:23.019106   29722 cri.go:89] found id: "7477be1e8e83d8e93db214a41f2cbe2dac4702f15bafed422689a1ad41a282ee"
	I1217 00:07:23.019113   29722 cri.go:89] found id: "88b4569360ba98b30f65b36287d6c38a51676cebffa1785dc5414861aa1a0629"
	I1217 00:07:23.019118   29722 cri.go:89] found id: "7ad73ae76171d2d2105f4ccfe0862424948abc4be7d39a9f3d3999660c222211"
	I1217 00:07:23.019122   29722 cri.go:89] found id: "e77e53ca2e567bf130231e95ef7e993ca42c0bc61aab6ffced345e9e69c005cc"
	I1217 00:07:23.019127   29722 cri.go:89] found id: "d5d932c1082d35b313501328162c7f2f663374a7e3c58c4c2b1114359e9493df"
	I1217 00:07:23.019131   29722 cri.go:89] found id: "5d7bc94a6e7622d199da43ca8e9942b4e849ae76949d0554b9d548a510dd26ce"
	I1217 00:07:23.019135   29722 cri.go:89] found id: "b3c5366ec83c76413d2675b009b0c59e92e94121bf53dd83abb96b2fc0bd58b7"
	I1217 00:07:23.019148   29722 cri.go:89] found id: "dfd9e15edab91358da5ee7de7e20baaf2f8b820f9507af61b26dcbf0be9749ac"
	I1217 00:07:23.019157   29722 cri.go:89] found id: "a380c22257b5cfb547f66e134e244b2c1d6bd55bad431f846b76089ef28f6a89"
	I1217 00:07:23.019163   29722 cri.go:89] found id: "f6e58bb2900bb7013f6f81ccca2250cb1b6547be3edcc33d4bc867ae9d0b4072"
	I1217 00:07:23.019168   29722 cri.go:89] found id: "383049ced70e6adc7b2ef0d1a415cd527a2670898bfb2873c9b8955afffff3eb"
	I1217 00:07:23.019173   29722 cri.go:89] found id: "840301d1bb594051e430e85719c2707ed97013c7e3269f84012213ab768d9935"
	I1217 00:07:23.019182   29722 cri.go:89] found id: "950dc7c477829d5fc62b7e10ed2edf92016de18feb8bc6c8d8262fbf28097b78"
	I1217 00:07:23.019186   29722 cri.go:89] found id: "c85efeb6af746eaf16f9b1ef2458c5065693555ece6e3b595a07ccc7b8c2e6d9"
	I1217 00:07:23.019189   29722 cri.go:89] found id: "f55d4645a3da61635311b8471dafe61926de73f6c6575bcdc112a086cfde666a"
	I1217 00:07:23.019197   29722 cri.go:89] found id: "a9fb6926bb935bb35c23586c7a59d3ecc32fdac56e6508767e75f4b3b5db4340"
	I1217 00:07:23.019208   29722 cri.go:89] found id: "a5dab92a052f84df44c207e5dd5c238be41faadb22b92368fa8135c2af2fd265"
	I1217 00:07:23.019216   29722 cri.go:89] found id: ""
	I1217 00:07:23.019261   29722 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 00:07:23.032682   29722 out.go:203] 
	W1217 00:07:23.033873   29722 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T00:07:23Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T00:07:23Z" level=error msg="open /run/runc: no such file or directory"
	
	W1217 00:07:23.033891   29722 out.go:285] * 
	* 
	W1217 00:07:23.036789   29722 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 00:07:23.037961   29722 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-amd64 -p addons-401977 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (10.04s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.23s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:353: "nvidia-device-plugin-daemonset-xk8ql" [6f8c2cc8-3d77-495a-902d-fc67c36cde4d] Running
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.003628004s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-401977 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-401977 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (228.941403ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 00:07:10.227805   28727 out.go:360] Setting OutFile to fd 1 ...
	I1217 00:07:10.228102   28727 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:07:10.228112   28727 out.go:374] Setting ErrFile to fd 2...
	I1217 00:07:10.228116   28727 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:07:10.228268   28727 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-12816/.minikube/bin
	I1217 00:07:10.228489   28727 mustload.go:66] Loading cluster: addons-401977
	I1217 00:07:10.228788   28727 config.go:182] Loaded profile config "addons-401977": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:07:10.228809   28727 addons.go:622] checking whether the cluster is paused
	I1217 00:07:10.228908   28727 config.go:182] Loaded profile config "addons-401977": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:07:10.228921   28727 host.go:66] Checking if "addons-401977" exists ...
	I1217 00:07:10.229304   28727 cli_runner.go:164] Run: docker container inspect addons-401977 --format={{.State.Status}}
	I1217 00:07:10.246564   28727 ssh_runner.go:195] Run: systemctl --version
	I1217 00:07:10.246607   28727 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-401977
	I1217 00:07:10.263546   28727 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/addons-401977/id_rsa Username:docker}
	I1217 00:07:10.353169   28727 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 00:07:10.353234   28727 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 00:07:10.382674   28727 cri.go:89] found id: "ad54e02660b07cbde6d493c7d5e3ed172475b94a9eaee4e87d4bd9ef151c0b22"
	I1217 00:07:10.382717   28727 cri.go:89] found id: "45b944b097394c01a6a9f73b7481a21c99e516a6329609ae554a14bb17a1b0c4"
	I1217 00:07:10.382723   28727 cri.go:89] found id: "96a32ce198ee5fea679ca8aa4c1ec23792c97d3131fb348ba7d61057703f8b98"
	I1217 00:07:10.382730   28727 cri.go:89] found id: "6060f042efdda63b0e493f156e033554f81a8024186ecd9b75e89903f49cc5a6"
	I1217 00:07:10.382735   28727 cri.go:89] found id: "cfb270351b7717448c0caad78a981c83e200c3645e7ea23795af66b940e7f694"
	I1217 00:07:10.382742   28727 cri.go:89] found id: "60a76e334b179e66cc6937fdcf120c474c69221436fe9732a138ce177b409c81"
	I1217 00:07:10.382746   28727 cri.go:89] found id: "404a83db71038ead5c1b120d09189bcde64f22a00bc12c772da57ed50d0b4e31"
	I1217 00:07:10.382752   28727 cri.go:89] found id: "7477be1e8e83d8e93db214a41f2cbe2dac4702f15bafed422689a1ad41a282ee"
	I1217 00:07:10.382756   28727 cri.go:89] found id: "88b4569360ba98b30f65b36287d6c38a51676cebffa1785dc5414861aa1a0629"
	I1217 00:07:10.382771   28727 cri.go:89] found id: "7ad73ae76171d2d2105f4ccfe0862424948abc4be7d39a9f3d3999660c222211"
	I1217 00:07:10.382776   28727 cri.go:89] found id: "e77e53ca2e567bf130231e95ef7e993ca42c0bc61aab6ffced345e9e69c005cc"
	I1217 00:07:10.382780   28727 cri.go:89] found id: "d5d932c1082d35b313501328162c7f2f663374a7e3c58c4c2b1114359e9493df"
	I1217 00:07:10.382785   28727 cri.go:89] found id: "5d7bc94a6e7622d199da43ca8e9942b4e849ae76949d0554b9d548a510dd26ce"
	I1217 00:07:10.382789   28727 cri.go:89] found id: "b3c5366ec83c76413d2675b009b0c59e92e94121bf53dd83abb96b2fc0bd58b7"
	I1217 00:07:10.382793   28727 cri.go:89] found id: "dfd9e15edab91358da5ee7de7e20baaf2f8b820f9507af61b26dcbf0be9749ac"
	I1217 00:07:10.382809   28727 cri.go:89] found id: "a380c22257b5cfb547f66e134e244b2c1d6bd55bad431f846b76089ef28f6a89"
	I1217 00:07:10.382817   28727 cri.go:89] found id: "f6e58bb2900bb7013f6f81ccca2250cb1b6547be3edcc33d4bc867ae9d0b4072"
	I1217 00:07:10.382823   28727 cri.go:89] found id: "383049ced70e6adc7b2ef0d1a415cd527a2670898bfb2873c9b8955afffff3eb"
	I1217 00:07:10.382828   28727 cri.go:89] found id: "840301d1bb594051e430e85719c2707ed97013c7e3269f84012213ab768d9935"
	I1217 00:07:10.382832   28727 cri.go:89] found id: "950dc7c477829d5fc62b7e10ed2edf92016de18feb8bc6c8d8262fbf28097b78"
	I1217 00:07:10.382835   28727 cri.go:89] found id: "c85efeb6af746eaf16f9b1ef2458c5065693555ece6e3b595a07ccc7b8c2e6d9"
	I1217 00:07:10.382839   28727 cri.go:89] found id: "f55d4645a3da61635311b8471dafe61926de73f6c6575bcdc112a086cfde666a"
	I1217 00:07:10.382843   28727 cri.go:89] found id: "a9fb6926bb935bb35c23586c7a59d3ecc32fdac56e6508767e75f4b3b5db4340"
	I1217 00:07:10.382846   28727 cri.go:89] found id: "a5dab92a052f84df44c207e5dd5c238be41faadb22b92368fa8135c2af2fd265"
	I1217 00:07:10.382850   28727 cri.go:89] found id: ""
	I1217 00:07:10.382908   28727 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 00:07:10.396689   28727 out.go:203] 
	W1217 00:07:10.397872   28727 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T00:07:10Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T00:07:10Z" level=error msg="open /run/runc: no such file or directory"
	
	W1217 00:07:10.397891   28727 out.go:285] * 
	* 
	W1217 00:07:10.401191   28727 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 00:07:10.402360   28727 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-amd64 -p addons-401977 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (5.23s)

                                                
                                    
TestAddons/parallel/Yakd (5.24s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:353: "yakd-dashboard-5ff678cb9-m4h6g" [7787b49f-3256-4fcf-980c-a58328b8e999] Running
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.003653032s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-401977 addons disable yakd --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-401977 addons disable yakd --alsologtostderr -v=1: exit status 11 (231.04649ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 00:07:15.461559   29155 out.go:360] Setting OutFile to fd 1 ...
	I1217 00:07:15.461821   29155 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:07:15.461831   29155 out.go:374] Setting ErrFile to fd 2...
	I1217 00:07:15.461835   29155 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:07:15.462019   29155 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-12816/.minikube/bin
	I1217 00:07:15.462266   29155 mustload.go:66] Loading cluster: addons-401977
	I1217 00:07:15.462556   29155 config.go:182] Loaded profile config "addons-401977": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:07:15.462573   29155 addons.go:622] checking whether the cluster is paused
	I1217 00:07:15.462658   29155 config.go:182] Loaded profile config "addons-401977": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:07:15.462670   29155 host.go:66] Checking if "addons-401977" exists ...
	I1217 00:07:15.463012   29155 cli_runner.go:164] Run: docker container inspect addons-401977 --format={{.State.Status}}
	I1217 00:07:15.481618   29155 ssh_runner.go:195] Run: systemctl --version
	I1217 00:07:15.481669   29155 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-401977
	I1217 00:07:15.498833   29155 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/addons-401977/id_rsa Username:docker}
	I1217 00:07:15.589517   29155 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 00:07:15.589596   29155 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 00:07:15.619050   29155 cri.go:89] found id: "7d625ea0827567c1ecf6072d9f705ee4a0e2896e259825605202bb193195e013"
	I1217 00:07:15.619076   29155 cri.go:89] found id: "ad54e02660b07cbde6d493c7d5e3ed172475b94a9eaee4e87d4bd9ef151c0b22"
	I1217 00:07:15.619081   29155 cri.go:89] found id: "45b944b097394c01a6a9f73b7481a21c99e516a6329609ae554a14bb17a1b0c4"
	I1217 00:07:15.619084   29155 cri.go:89] found id: "96a32ce198ee5fea679ca8aa4c1ec23792c97d3131fb348ba7d61057703f8b98"
	I1217 00:07:15.619088   29155 cri.go:89] found id: "6060f042efdda63b0e493f156e033554f81a8024186ecd9b75e89903f49cc5a6"
	I1217 00:07:15.619092   29155 cri.go:89] found id: "cfb270351b7717448c0caad78a981c83e200c3645e7ea23795af66b940e7f694"
	I1217 00:07:15.619094   29155 cri.go:89] found id: "60a76e334b179e66cc6937fdcf120c474c69221436fe9732a138ce177b409c81"
	I1217 00:07:15.619097   29155 cri.go:89] found id: "404a83db71038ead5c1b120d09189bcde64f22a00bc12c772da57ed50d0b4e31"
	I1217 00:07:15.619099   29155 cri.go:89] found id: "7477be1e8e83d8e93db214a41f2cbe2dac4702f15bafed422689a1ad41a282ee"
	I1217 00:07:15.619105   29155 cri.go:89] found id: "88b4569360ba98b30f65b36287d6c38a51676cebffa1785dc5414861aa1a0629"
	I1217 00:07:15.619108   29155 cri.go:89] found id: "7ad73ae76171d2d2105f4ccfe0862424948abc4be7d39a9f3d3999660c222211"
	I1217 00:07:15.619111   29155 cri.go:89] found id: "e77e53ca2e567bf130231e95ef7e993ca42c0bc61aab6ffced345e9e69c005cc"
	I1217 00:07:15.619114   29155 cri.go:89] found id: "d5d932c1082d35b313501328162c7f2f663374a7e3c58c4c2b1114359e9493df"
	I1217 00:07:15.619117   29155 cri.go:89] found id: "5d7bc94a6e7622d199da43ca8e9942b4e849ae76949d0554b9d548a510dd26ce"
	I1217 00:07:15.619120   29155 cri.go:89] found id: "b3c5366ec83c76413d2675b009b0c59e92e94121bf53dd83abb96b2fc0bd58b7"
	I1217 00:07:15.619124   29155 cri.go:89] found id: "dfd9e15edab91358da5ee7de7e20baaf2f8b820f9507af61b26dcbf0be9749ac"
	I1217 00:07:15.619127   29155 cri.go:89] found id: "a380c22257b5cfb547f66e134e244b2c1d6bd55bad431f846b76089ef28f6a89"
	I1217 00:07:15.619132   29155 cri.go:89] found id: "f6e58bb2900bb7013f6f81ccca2250cb1b6547be3edcc33d4bc867ae9d0b4072"
	I1217 00:07:15.619134   29155 cri.go:89] found id: "383049ced70e6adc7b2ef0d1a415cd527a2670898bfb2873c9b8955afffff3eb"
	I1217 00:07:15.619137   29155 cri.go:89] found id: "840301d1bb594051e430e85719c2707ed97013c7e3269f84012213ab768d9935"
	I1217 00:07:15.619140   29155 cri.go:89] found id: "950dc7c477829d5fc62b7e10ed2edf92016de18feb8bc6c8d8262fbf28097b78"
	I1217 00:07:15.619143   29155 cri.go:89] found id: "c85efeb6af746eaf16f9b1ef2458c5065693555ece6e3b595a07ccc7b8c2e6d9"
	I1217 00:07:15.619145   29155 cri.go:89] found id: "f55d4645a3da61635311b8471dafe61926de73f6c6575bcdc112a086cfde666a"
	I1217 00:07:15.619148   29155 cri.go:89] found id: "a9fb6926bb935bb35c23586c7a59d3ecc32fdac56e6508767e75f4b3b5db4340"
	I1217 00:07:15.619155   29155 cri.go:89] found id: "a5dab92a052f84df44c207e5dd5c238be41faadb22b92368fa8135c2af2fd265"
	I1217 00:07:15.619158   29155 cri.go:89] found id: ""
	I1217 00:07:15.619203   29155 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 00:07:15.633033   29155 out.go:203] 
	W1217 00:07:15.634120   29155 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T00:07:15Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T00:07:15Z" level=error msg="open /run/runc: no such file or directory"
	
	W1217 00:07:15.634142   29155 out.go:285] * 
	* 
	W1217 00:07:15.636976   29155 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 00:07:15.638330   29155 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable yakd addon: args "out/minikube-linux-amd64 -p addons-401977 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (5.24s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (5.24s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1040: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:353: "amd-gpu-device-plugin-zhxtw" [39b7820e-9767-4f89-a35e-e8e970dc8ced] Running
addons_test.go:1040: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 5.003946429s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-401977 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-401977 addons disable amd-gpu-device-plugin --alsologtostderr -v=1: exit status 11 (233.764981ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 00:07:12.820549   28946 out.go:360] Setting OutFile to fd 1 ...
	I1217 00:07:12.820721   28946 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:07:12.820733   28946 out.go:374] Setting ErrFile to fd 2...
	I1217 00:07:12.820738   28946 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:07:12.820937   28946 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-12816/.minikube/bin
	I1217 00:07:12.821226   28946 mustload.go:66] Loading cluster: addons-401977
	I1217 00:07:12.821544   28946 config.go:182] Loaded profile config "addons-401977": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:07:12.821563   28946 addons.go:622] checking whether the cluster is paused
	I1217 00:07:12.821646   28946 config.go:182] Loaded profile config "addons-401977": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:07:12.821664   28946 host.go:66] Checking if "addons-401977" exists ...
	I1217 00:07:12.822074   28946 cli_runner.go:164] Run: docker container inspect addons-401977 --format={{.State.Status}}
	I1217 00:07:12.839473   28946 ssh_runner.go:195] Run: systemctl --version
	I1217 00:07:12.839555   28946 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-401977
	I1217 00:07:12.856672   28946 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/addons-401977/id_rsa Username:docker}
	I1217 00:07:12.947163   28946 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 00:07:12.947455   28946 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 00:07:12.976511   28946 cri.go:89] found id: "7d625ea0827567c1ecf6072d9f705ee4a0e2896e259825605202bb193195e013"
	I1217 00:07:12.976532   28946 cri.go:89] found id: "ad54e02660b07cbde6d493c7d5e3ed172475b94a9eaee4e87d4bd9ef151c0b22"
	I1217 00:07:12.976536   28946 cri.go:89] found id: "45b944b097394c01a6a9f73b7481a21c99e516a6329609ae554a14bb17a1b0c4"
	I1217 00:07:12.976539   28946 cri.go:89] found id: "96a32ce198ee5fea679ca8aa4c1ec23792c97d3131fb348ba7d61057703f8b98"
	I1217 00:07:12.976542   28946 cri.go:89] found id: "6060f042efdda63b0e493f156e033554f81a8024186ecd9b75e89903f49cc5a6"
	I1217 00:07:12.976545   28946 cri.go:89] found id: "cfb270351b7717448c0caad78a981c83e200c3645e7ea23795af66b940e7f694"
	I1217 00:07:12.976548   28946 cri.go:89] found id: "60a76e334b179e66cc6937fdcf120c474c69221436fe9732a138ce177b409c81"
	I1217 00:07:12.976551   28946 cri.go:89] found id: "404a83db71038ead5c1b120d09189bcde64f22a00bc12c772da57ed50d0b4e31"
	I1217 00:07:12.976554   28946 cri.go:89] found id: "7477be1e8e83d8e93db214a41f2cbe2dac4702f15bafed422689a1ad41a282ee"
	I1217 00:07:12.976559   28946 cri.go:89] found id: "88b4569360ba98b30f65b36287d6c38a51676cebffa1785dc5414861aa1a0629"
	I1217 00:07:12.976561   28946 cri.go:89] found id: "7ad73ae76171d2d2105f4ccfe0862424948abc4be7d39a9f3d3999660c222211"
	I1217 00:07:12.976564   28946 cri.go:89] found id: "e77e53ca2e567bf130231e95ef7e993ca42c0bc61aab6ffced345e9e69c005cc"
	I1217 00:07:12.976567   28946 cri.go:89] found id: "d5d932c1082d35b313501328162c7f2f663374a7e3c58c4c2b1114359e9493df"
	I1217 00:07:12.976569   28946 cri.go:89] found id: "5d7bc94a6e7622d199da43ca8e9942b4e849ae76949d0554b9d548a510dd26ce"
	I1217 00:07:12.976572   28946 cri.go:89] found id: "b3c5366ec83c76413d2675b009b0c59e92e94121bf53dd83abb96b2fc0bd58b7"
	I1217 00:07:12.976577   28946 cri.go:89] found id: "dfd9e15edab91358da5ee7de7e20baaf2f8b820f9507af61b26dcbf0be9749ac"
	I1217 00:07:12.976579   28946 cri.go:89] found id: "a380c22257b5cfb547f66e134e244b2c1d6bd55bad431f846b76089ef28f6a89"
	I1217 00:07:12.976583   28946 cri.go:89] found id: "f6e58bb2900bb7013f6f81ccca2250cb1b6547be3edcc33d4bc867ae9d0b4072"
	I1217 00:07:12.976586   28946 cri.go:89] found id: "383049ced70e6adc7b2ef0d1a415cd527a2670898bfb2873c9b8955afffff3eb"
	I1217 00:07:12.976588   28946 cri.go:89] found id: "840301d1bb594051e430e85719c2707ed97013c7e3269f84012213ab768d9935"
	I1217 00:07:12.976593   28946 cri.go:89] found id: "950dc7c477829d5fc62b7e10ed2edf92016de18feb8bc6c8d8262fbf28097b78"
	I1217 00:07:12.976596   28946 cri.go:89] found id: "c85efeb6af746eaf16f9b1ef2458c5065693555ece6e3b595a07ccc7b8c2e6d9"
	I1217 00:07:12.976598   28946 cri.go:89] found id: "f55d4645a3da61635311b8471dafe61926de73f6c6575bcdc112a086cfde666a"
	I1217 00:07:12.976601   28946 cri.go:89] found id: "a9fb6926bb935bb35c23586c7a59d3ecc32fdac56e6508767e75f4b3b5db4340"
	I1217 00:07:12.976604   28946 cri.go:89] found id: "a5dab92a052f84df44c207e5dd5c238be41faadb22b92368fa8135c2af2fd265"
	I1217 00:07:12.976606   28946 cri.go:89] found id: ""
	I1217 00:07:12.976653   28946 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 00:07:12.990033   28946 out.go:203] 
	W1217 00:07:12.991059   28946 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T00:07:12Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T00:07:12Z" level=error msg="open /run/runc: no such file or directory"
	
	W1217 00:07:12.991075   28946 out.go:285] * 
	* 
	W1217 00:07:12.993872   28946 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_d91df5e23a6c7812cf3b3b0d72c142ff742a541e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_d91df5e23a6c7812cf3b3b0d72c142ff742a541e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 00:07:12.995138   28946 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable amd-gpu-device-plugin addon: args "out/minikube-linux-amd64 -p addons-401977 addons disable amd-gpu-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/AmdGpuDevicePlugin (5.24s)

                                                
                                    
TestJSONOutput/pause/Command (2.13s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-664600 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p json-output-664600 --output=json --user=testUser: exit status 80 (2.128144588s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"eb9a22ea-6611-444c-9d11-3f344144e628","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-664600 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"3ac0d184-6fdf-47eb-bf6c-9a963817e534","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-12-17T00:24:41Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"45e10b75-8808-4599-9a39-f207867df239","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 pause -p json-output-664600 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (2.13s)

                                                
                                    
TestJSONOutput/unpause/Command (1.87s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-664600 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 unpause -p json-output-664600 --output=json --user=testUser: exit status 80 (1.865186426s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"f3f37c96-3602-4e34-beec-25ddd9519a5f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-664600 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"903f8b94-732e-4753-b3c3-1549f25e6763","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-12-17T00:24:43Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"e3660412-83ec-4aec-be88-094d7dfcd5d2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 unpause -p json-output-664600 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (1.87s)

                                                
                                    
TestPause/serial/Pause (5.72s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-004564 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p pause-004564 --alsologtostderr -v=5: exit status 80 (2.426069713s)

                                                
                                                
-- stdout --
	* Pausing node pause-004564 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 00:37:24.216352  216682 out.go:360] Setting OutFile to fd 1 ...
	I1217 00:37:24.216474  216682 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:37:24.216484  216682 out.go:374] Setting ErrFile to fd 2...
	I1217 00:37:24.216488  216682 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:37:24.216712  216682 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-12816/.minikube/bin
	I1217 00:37:24.217023  216682 out.go:368] Setting JSON to false
	I1217 00:37:24.217047  216682 mustload.go:66] Loading cluster: pause-004564
	I1217 00:37:24.217490  216682 config.go:182] Loaded profile config "pause-004564": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:37:24.217968  216682 cli_runner.go:164] Run: docker container inspect pause-004564 --format={{.State.Status}}
	I1217 00:37:24.236933  216682 host.go:66] Checking if "pause-004564" exists ...
	I1217 00:37:24.237557  216682 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:37:24.293472  216682 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:77 SystemTime:2025-12-17 00:37:24.283613932 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 00:37:24.294115  216682 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1765846775-22141/minikube-v1.37.0-1765846775-22141-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1765846775-22141-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-004564 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1217 00:37:24.295900  216682 out.go:179] * Pausing node pause-004564 ... 
	I1217 00:37:24.297042  216682 host.go:66] Checking if "pause-004564" exists ...
	I1217 00:37:24.297293  216682 ssh_runner.go:195] Run: systemctl --version
	I1217 00:37:24.297335  216682 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-004564
	I1217 00:37:24.315512  216682 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32978 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/pause-004564/id_rsa Username:docker}
	I1217 00:37:24.405237  216682 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 00:37:24.416867  216682 pause.go:52] kubelet running: true
	I1217 00:37:24.416918  216682 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1217 00:37:24.550857  216682 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1217 00:37:24.550937  216682 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1217 00:37:24.619439  216682 cri.go:89] found id: "261675999c5abb3189891f43e3b1b48ba4936a0794d59ca8d49a88c9a6851f5b"
	I1217 00:37:24.619460  216682 cri.go:89] found id: "e75763c0a782db6b82bbcdce6755f2e6f1c075e2577e8608b78caad6bd9e0685"
	I1217 00:37:24.619466  216682 cri.go:89] found id: "312296c55960188c8b0406f7ef3b76b6aa39658e5891d4d5d9cb3e5c5de8a96b"
	I1217 00:37:24.619471  216682 cri.go:89] found id: "2ec6662e3cdaf85a7ed50022d863b9b780287c6ce573ffe5414b6462ad51e698"
	I1217 00:37:24.619476  216682 cri.go:89] found id: "8ad49dd6c20bb9213f368f20baf3c0d05e5d7b019e452f80bf3758ec6690483c"
	I1217 00:37:24.619481  216682 cri.go:89] found id: "822c708ab7dfc212f531b65b81db4f5bc4505719e6a921f57d1232f703310c0a"
	I1217 00:37:24.619485  216682 cri.go:89] found id: "75a2fcfe1b2251781ce5430b7e7160f17def85d218ac84eb522f00d0f2ce3ccb"
	I1217 00:37:24.619490  216682 cri.go:89] found id: ""
	I1217 00:37:24.619533  216682 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 00:37:24.631052  216682 retry.go:31] will retry after 190.876893ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T00:37:24Z" level=error msg="open /run/runc: no such file or directory"
	I1217 00:37:24.822487  216682 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 00:37:24.834842  216682 pause.go:52] kubelet running: false
	I1217 00:37:24.834902  216682 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1217 00:37:24.961041  216682 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1217 00:37:24.961141  216682 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1217 00:37:25.029735  216682 cri.go:89] found id: "261675999c5abb3189891f43e3b1b48ba4936a0794d59ca8d49a88c9a6851f5b"
	I1217 00:37:25.029752  216682 cri.go:89] found id: "e75763c0a782db6b82bbcdce6755f2e6f1c075e2577e8608b78caad6bd9e0685"
	I1217 00:37:25.029756  216682 cri.go:89] found id: "312296c55960188c8b0406f7ef3b76b6aa39658e5891d4d5d9cb3e5c5de8a96b"
	I1217 00:37:25.029759  216682 cri.go:89] found id: "2ec6662e3cdaf85a7ed50022d863b9b780287c6ce573ffe5414b6462ad51e698"
	I1217 00:37:25.029762  216682 cri.go:89] found id: "8ad49dd6c20bb9213f368f20baf3c0d05e5d7b019e452f80bf3758ec6690483c"
	I1217 00:37:25.029765  216682 cri.go:89] found id: "822c708ab7dfc212f531b65b81db4f5bc4505719e6a921f57d1232f703310c0a"
	I1217 00:37:25.029768  216682 cri.go:89] found id: "75a2fcfe1b2251781ce5430b7e7160f17def85d218ac84eb522f00d0f2ce3ccb"
	I1217 00:37:25.029771  216682 cri.go:89] found id: ""
	I1217 00:37:25.029822  216682 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 00:37:25.040834  216682 retry.go:31] will retry after 533.222405ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T00:37:25Z" level=error msg="open /run/runc: no such file or directory"
	I1217 00:37:25.575176  216682 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 00:37:25.588587  216682 pause.go:52] kubelet running: false
	I1217 00:37:25.588642  216682 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1217 00:37:25.701232  216682 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1217 00:37:25.701326  216682 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1217 00:37:25.771607  216682 cri.go:89] found id: "261675999c5abb3189891f43e3b1b48ba4936a0794d59ca8d49a88c9a6851f5b"
	I1217 00:37:25.771636  216682 cri.go:89] found id: "e75763c0a782db6b82bbcdce6755f2e6f1c075e2577e8608b78caad6bd9e0685"
	I1217 00:37:25.771642  216682 cri.go:89] found id: "312296c55960188c8b0406f7ef3b76b6aa39658e5891d4d5d9cb3e5c5de8a96b"
	I1217 00:37:25.771648  216682 cri.go:89] found id: "2ec6662e3cdaf85a7ed50022d863b9b780287c6ce573ffe5414b6462ad51e698"
	I1217 00:37:25.771653  216682 cri.go:89] found id: "8ad49dd6c20bb9213f368f20baf3c0d05e5d7b019e452f80bf3758ec6690483c"
	I1217 00:37:25.771658  216682 cri.go:89] found id: "822c708ab7dfc212f531b65b81db4f5bc4505719e6a921f57d1232f703310c0a"
	I1217 00:37:25.771662  216682 cri.go:89] found id: "75a2fcfe1b2251781ce5430b7e7160f17def85d218ac84eb522f00d0f2ce3ccb"
	I1217 00:37:25.771666  216682 cri.go:89] found id: ""
	I1217 00:37:25.771713  216682 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 00:37:25.783310  216682 retry.go:31] will retry after 590.362066ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T00:37:25Z" level=error msg="open /run/runc: no such file or directory"
	I1217 00:37:26.373829  216682 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 00:37:26.385744  216682 pause.go:52] kubelet running: false
	I1217 00:37:26.385797  216682 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1217 00:37:26.496425  216682 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1217 00:37:26.496506  216682 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1217 00:37:26.562398  216682 cri.go:89] found id: "261675999c5abb3189891f43e3b1b48ba4936a0794d59ca8d49a88c9a6851f5b"
	I1217 00:37:26.562427  216682 cri.go:89] found id: "e75763c0a782db6b82bbcdce6755f2e6f1c075e2577e8608b78caad6bd9e0685"
	I1217 00:37:26.562434  216682 cri.go:89] found id: "312296c55960188c8b0406f7ef3b76b6aa39658e5891d4d5d9cb3e5c5de8a96b"
	I1217 00:37:26.562437  216682 cri.go:89] found id: "2ec6662e3cdaf85a7ed50022d863b9b780287c6ce573ffe5414b6462ad51e698"
	I1217 00:37:26.562440  216682 cri.go:89] found id: "8ad49dd6c20bb9213f368f20baf3c0d05e5d7b019e452f80bf3758ec6690483c"
	I1217 00:37:26.562442  216682 cri.go:89] found id: "822c708ab7dfc212f531b65b81db4f5bc4505719e6a921f57d1232f703310c0a"
	I1217 00:37:26.562445  216682 cri.go:89] found id: "75a2fcfe1b2251781ce5430b7e7160f17def85d218ac84eb522f00d0f2ce3ccb"
	I1217 00:37:26.562448  216682 cri.go:89] found id: ""
	I1217 00:37:26.562481  216682 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 00:37:26.576275  216682 out.go:203] 
	W1217 00:37:26.577497  216682 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T00:37:26Z" level=error msg="open /run/runc: no such file or directory"
	
	W1217 00:37:26.577517  216682 out.go:285] * 
	W1217 00:37:26.581322  216682 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 00:37:26.582725  216682 out.go:203] 

                                                
                                                
** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-amd64 pause -p pause-004564 --alsologtostderr -v=5" : exit status 80
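For reference, the probe that trips GUEST_PAUSE above can be re-run by hand while the node is still up. This is a minimal sketch, not part of the test run, assuming the pause-004564 container from this report still exists and that runc/crictl sit where the kicbase image normally puts them:

    # same listing minikube attempts, executed over the node's SSH session
    out/minikube-linux-amd64 ssh -p pause-004564 -- sudo runc list -f json
    # crictl view of the kube-system containers, mirroring the cri.go listing above
    out/minikube-linux-amd64 ssh -p pause-004564 -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system

If /run/runc is genuinely absent on the node, as the stderr above reports, the first command should fail the same way, which narrows the failure to the runtime root rather than to the pause logic itself.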
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect pause-004564
helpers_test.go:244: (dbg) docker inspect pause-004564:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c272977a8ce7ffb0a5e2589b8ea71824443603a9dcca351180383dbfc7518ee6",
	        "Created": "2025-12-17T00:36:33.399458448Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 202101,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T00:36:35.039454406Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2e44aac5cae5bb6b68b129ed5c85e80a5c1aac07706537d46ba12326f0e5c3cf",
	        "ResolvConfPath": "/var/lib/docker/containers/c272977a8ce7ffb0a5e2589b8ea71824443603a9dcca351180383dbfc7518ee6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c272977a8ce7ffb0a5e2589b8ea71824443603a9dcca351180383dbfc7518ee6/hostname",
	        "HostsPath": "/var/lib/docker/containers/c272977a8ce7ffb0a5e2589b8ea71824443603a9dcca351180383dbfc7518ee6/hosts",
	        "LogPath": "/var/lib/docker/containers/c272977a8ce7ffb0a5e2589b8ea71824443603a9dcca351180383dbfc7518ee6/c272977a8ce7ffb0a5e2589b8ea71824443603a9dcca351180383dbfc7518ee6-json.log",
	        "Name": "/pause-004564",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-004564:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-004564",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c272977a8ce7ffb0a5e2589b8ea71824443603a9dcca351180383dbfc7518ee6",
	                "LowerDir": "/var/lib/docker/overlay2/4440d8fb5abcb6d1e92daf5f4e790f0143ce0c81d3a8f9ed7efb5badb13dda35-init/diff:/var/lib/docker/overlay2/594b812fd6d8db89dab322ea9e00d43dd555e9709fb5e6953e3873cce717392c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4440d8fb5abcb6d1e92daf5f4e790f0143ce0c81d3a8f9ed7efb5badb13dda35/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4440d8fb5abcb6d1e92daf5f4e790f0143ce0c81d3a8f9ed7efb5badb13dda35/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4440d8fb5abcb6d1e92daf5f4e790f0143ce0c81d3a8f9ed7efb5badb13dda35/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-004564",
	                "Source": "/var/lib/docker/volumes/pause-004564/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-004564",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-004564",
	                "name.minikube.sigs.k8s.io": "pause-004564",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "d204db33066d3718df4faf649607d290cbecd512b84dfe338ed045623d50a34b",
	            "SandboxKey": "/var/run/docker/netns/d204db33066d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32978"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32979"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32982"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32980"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32981"
	                    }
	                ]
	            },
	            "Networks": {
	                "pause-004564": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f54752e69b6f63b62ff8d5c8db4b91cbc4f8f60304c6d17c87da8cafe2ccc229",
	                    "EndpointID": "a85dd5b5ffc6d608e752b68e3344e98a5baefe84bfa12dfb8530904332eacc0a",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "f2:4b:10:9b:7b:3e",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-004564",
	                        "c272977a8ce7"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
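The NetworkSettings block above is also what the driver reads back to locate the node; the Go templates below have the same shape as the `docker container inspect -f` invocations that appear further down in this log (a sketch for manual inspection only, using the profile name from this run):

    # host port forwarded to the node's SSH daemon (22/tcp)
    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' pause-004564
    # container IP on the pause-004564 bridge network
    docker container inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' pause-004564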
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-004564 -n pause-004564
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-004564 -n pause-004564: exit status 2 (318.687747ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p pause-004564 logs -n 25
helpers_test.go:261: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │           PROFILE           │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼─────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p scheduled-stop-123503 --memory=3072 --driver=docker  --container-runtime=crio                                                         │ scheduled-stop-123503       │ jenkins │ v1.37.0 │ 17 Dec 25 00:34 UTC │ 17 Dec 25 00:34 UTC │
	│ stop    │ -p scheduled-stop-123503 --schedule 5m -v=5 --alsologtostderr                                                                            │ scheduled-stop-123503       │ jenkins │ v1.37.0 │ 17 Dec 25 00:34 UTC │                     │
	│ stop    │ -p scheduled-stop-123503 --schedule 5m -v=5 --alsologtostderr                                                                            │ scheduled-stop-123503       │ jenkins │ v1.37.0 │ 17 Dec 25 00:34 UTC │                     │
	│ stop    │ -p scheduled-stop-123503 --schedule 5m -v=5 --alsologtostderr                                                                            │ scheduled-stop-123503       │ jenkins │ v1.37.0 │ 17 Dec 25 00:34 UTC │                     │
	│ stop    │ -p scheduled-stop-123503 --schedule 15s -v=5 --alsologtostderr                                                                           │ scheduled-stop-123503       │ jenkins │ v1.37.0 │ 17 Dec 25 00:34 UTC │                     │
	│ stop    │ -p scheduled-stop-123503 --schedule 15s -v=5 --alsologtostderr                                                                           │ scheduled-stop-123503       │ jenkins │ v1.37.0 │ 17 Dec 25 00:34 UTC │                     │
	│ stop    │ -p scheduled-stop-123503 --schedule 15s -v=5 --alsologtostderr                                                                           │ scheduled-stop-123503       │ jenkins │ v1.37.0 │ 17 Dec 25 00:34 UTC │                     │
	│ stop    │ -p scheduled-stop-123503 --cancel-scheduled                                                                                              │ scheduled-stop-123503       │ jenkins │ v1.37.0 │ 17 Dec 25 00:34 UTC │ 17 Dec 25 00:34 UTC │
	│ stop    │ -p scheduled-stop-123503 --schedule 15s -v=5 --alsologtostderr                                                                           │ scheduled-stop-123503       │ jenkins │ v1.37.0 │ 17 Dec 25 00:35 UTC │                     │
	│ stop    │ -p scheduled-stop-123503 --schedule 15s -v=5 --alsologtostderr                                                                           │ scheduled-stop-123503       │ jenkins │ v1.37.0 │ 17 Dec 25 00:35 UTC │                     │
	│ stop    │ -p scheduled-stop-123503 --schedule 15s -v=5 --alsologtostderr                                                                           │ scheduled-stop-123503       │ jenkins │ v1.37.0 │ 17 Dec 25 00:35 UTC │ 17 Dec 25 00:35 UTC │
	│ delete  │ -p scheduled-stop-123503                                                                                                                 │ scheduled-stop-123503       │ jenkins │ v1.37.0 │ 17 Dec 25 00:36 UTC │ 17 Dec 25 00:36 UTC │
	│ start   │ -p insufficient-storage-503106 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio                         │ insufficient-storage-503106 │ jenkins │ v1.37.0 │ 17 Dec 25 00:36 UTC │                     │
	│ delete  │ -p insufficient-storage-503106                                                                                                           │ insufficient-storage-503106 │ jenkins │ v1.37.0 │ 17 Dec 25 00:36 UTC │ 17 Dec 25 00:36 UTC │
	│ start   │ -p offline-crio-981697 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio                        │ offline-crio-981697         │ jenkins │ v1.37.0 │ 17 Dec 25 00:36 UTC │ 17 Dec 25 00:37 UTC │
	│ start   │ -p pause-004564 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                │ pause-004564                │ jenkins │ v1.37.0 │ 17 Dec 25 00:36 UTC │ 17 Dec 25 00:37 UTC │
	│ start   │ -p stopped-upgrade-028618 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ stopped-upgrade-028618      │ jenkins │ v1.35.0 │ 17 Dec 25 00:36 UTC │ 17 Dec 25 00:36 UTC │
	│ start   │ -p missing-upgrade-043393 --memory=3072 --driver=docker  --container-runtime=crio                                                        │ missing-upgrade-043393      │ jenkins │ v1.35.0 │ 17 Dec 25 00:36 UTC │ 17 Dec 25 00:37 UTC │
	│ stop    │ stopped-upgrade-028618 stop                                                                                                              │ stopped-upgrade-028618      │ jenkins │ v1.35.0 │ 17 Dec 25 00:36 UTC │ 17 Dec 25 00:37 UTC │
	│ start   │ -p stopped-upgrade-028618 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ stopped-upgrade-028618      │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │                     │
	│ start   │ -p missing-upgrade-043393 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ missing-upgrade-043393      │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │                     │
	│ delete  │ -p offline-crio-981697                                                                                                                   │ offline-crio-981697         │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │ 17 Dec 25 00:37 UTC │
	│ start   │ -p kubernetes-upgrade-803959 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-803959   │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │                     │
	│ start   │ -p pause-004564 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                         │ pause-004564                │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │ 17 Dec 25 00:37 UTC │
	│ pause   │ -p pause-004564 --alsologtostderr -v=5                                                                                                   │ pause-004564                │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴─────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 00:37:15
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 00:37:15.942494  214740 out.go:360] Setting OutFile to fd 1 ...
	I1217 00:37:15.942590  214740 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:37:15.942596  214740 out.go:374] Setting ErrFile to fd 2...
	I1217 00:37:15.942602  214740 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:37:15.942848  214740 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-12816/.minikube/bin
	I1217 00:37:15.943276  214740 out.go:368] Setting JSON to false
	I1217 00:37:15.944399  214740 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4786,"bootTime":1765927050,"procs":287,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 00:37:15.944454  214740 start.go:143] virtualization: kvm guest
	I1217 00:37:15.946542  214740 out.go:179] * [pause-004564] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 00:37:15.947888  214740 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 00:37:15.947903  214740 notify.go:221] Checking for updates...
	I1217 00:37:15.950580  214740 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 00:37:15.952122  214740 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22168-12816/kubeconfig
	I1217 00:37:15.953569  214740 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-12816/.minikube
	I1217 00:37:15.954844  214740 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 00:37:15.956199  214740 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 00:37:15.957959  214740 config.go:182] Loaded profile config "pause-004564": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:37:15.958500  214740 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 00:37:15.983815  214740 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1217 00:37:15.983903  214740 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:37:16.041574  214740 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:83 SystemTime:2025-12-17 00:37:16.030862085 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 00:37:16.041679  214740 docker.go:319] overlay module found
	I1217 00:37:16.044433  214740 out.go:179] * Using the docker driver based on existing profile
	I1217 00:37:16.045731  214740 start.go:309] selected driver: docker
	I1217 00:37:16.045745  214740 start.go:927] validating driver "docker" against &{Name:pause-004564 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-004564 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false regi
stry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:37:16.045874  214740 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 00:37:16.045962  214740 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:37:16.104337  214740 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:83 SystemTime:2025-12-17 00:37:16.095373527 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 00:37:16.105251  214740 cni.go:84] Creating CNI manager for ""
	I1217 00:37:16.105326  214740 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 00:37:16.105387  214740 start.go:353] cluster config:
	{Name:pause-004564 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-004564 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false
storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:37:16.108114  214740 out.go:179] * Starting "pause-004564" primary control-plane node in "pause-004564" cluster
	I1217 00:37:16.109366  214740 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 00:37:16.110566  214740 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 00:37:16.111911  214740 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1217 00:37:16.111940  214740 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22168-12816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1217 00:37:16.111948  214740 cache.go:65] Caching tarball of preloaded images
	I1217 00:37:16.112023  214740 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 00:37:16.112147  214740 preload.go:238] Found /home/jenkins/minikube-integration/22168-12816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1217 00:37:16.112162  214740 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1217 00:37:16.112284  214740 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/pause-004564/config.json ...
	I1217 00:37:16.132060  214740 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 00:37:16.132085  214740 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 00:37:16.132123  214740 cache.go:243] Successfully downloaded all kic artifacts
	I1217 00:37:16.132161  214740 start.go:360] acquireMachinesLock for pause-004564: {Name:mka8c0316ef00c32675091c9dd37d74ceb3222c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 00:37:16.132230  214740 start.go:364] duration metric: took 45.555µs to acquireMachinesLock for "pause-004564"
	I1217 00:37:16.132256  214740 start.go:96] Skipping create...Using existing machine configuration
	I1217 00:37:16.132265  214740 fix.go:54] fixHost starting: 
	I1217 00:37:16.132551  214740 cli_runner.go:164] Run: docker container inspect pause-004564 --format={{.State.Status}}
	I1217 00:37:16.151118  214740 fix.go:112] recreateIfNeeded on pause-004564: state=Running err=<nil>
	W1217 00:37:16.151143  214740 fix.go:138] unexpected machine state, will restart: <nil>
	I1217 00:37:14.873642  214288 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1217 00:37:14.873889  214288 start.go:159] libmachine.API.Create for "kubernetes-upgrade-803959" (driver="docker")
	I1217 00:37:14.873922  214288 client.go:173] LocalClient.Create starting
	I1217 00:37:14.874012  214288 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem
	I1217 00:37:14.874055  214288 main.go:143] libmachine: Decoding PEM data...
	I1217 00:37:14.874079  214288 main.go:143] libmachine: Parsing certificate...
	I1217 00:37:14.874163  214288 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22168-12816/.minikube/certs/cert.pem
	I1217 00:37:14.874194  214288 main.go:143] libmachine: Decoding PEM data...
	I1217 00:37:14.874213  214288 main.go:143] libmachine: Parsing certificate...
	I1217 00:37:14.874589  214288 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-803959 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1217 00:37:14.890633  214288 cli_runner.go:211] docker network inspect kubernetes-upgrade-803959 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1217 00:37:14.890687  214288 network_create.go:284] running [docker network inspect kubernetes-upgrade-803959] to gather additional debugging logs...
	I1217 00:37:14.890705  214288 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-803959
	W1217 00:37:14.908220  214288 cli_runner.go:211] docker network inspect kubernetes-upgrade-803959 returned with exit code 1
	I1217 00:37:14.908249  214288 network_create.go:287] error running [docker network inspect kubernetes-upgrade-803959]: docker network inspect kubernetes-upgrade-803959: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kubernetes-upgrade-803959 not found
	I1217 00:37:14.908263  214288 network_create.go:289] output of [docker network inspect kubernetes-upgrade-803959]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kubernetes-upgrade-803959 not found
	
	** /stderr **
	I1217 00:37:14.908360  214288 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 00:37:14.925178  214288 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-6ffd1d738f01 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:6e:3d:52:75:47:82} reservation:<nil>}
	I1217 00:37:14.925659  214288 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-280edd437675 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:5e:ae:02:b5:f9:a6} reservation:<nil>}
	I1217 00:37:14.926286  214288 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-9f28d049043c IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:fa:3f:8e:e9:44:56} reservation:<nil>}
	I1217 00:37:14.927337  214288 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e9a940}
	I1217 00:37:14.927364  214288 network_create.go:124] attempt to create docker network kubernetes-upgrade-803959 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1217 00:37:14.927402  214288 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-803959 kubernetes-upgrade-803959
	I1217 00:37:14.973750  214288 network_create.go:108] docker network kubernetes-upgrade-803959 192.168.76.0/24 created
	I1217 00:37:14.973783  214288 kic.go:121] calculated static IP "192.168.76.2" for the "kubernetes-upgrade-803959" container
	I1217 00:37:14.973833  214288 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1217 00:37:14.990875  214288 cli_runner.go:164] Run: docker volume create kubernetes-upgrade-803959 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-803959 --label created_by.minikube.sigs.k8s.io=true
	I1217 00:37:15.008010  214288 oci.go:103] Successfully created a docker volume kubernetes-upgrade-803959
	I1217 00:37:15.008076  214288 cli_runner.go:164] Run: docker run --rm --name kubernetes-upgrade-803959-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-803959 --entrypoint /usr/bin/test -v kubernetes-upgrade-803959:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib
	I1217 00:37:15.373085  214288 oci.go:107] Successfully prepared a docker volume kubernetes-upgrade-803959
	I1217 00:37:15.373143  214288 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1217 00:37:15.373154  214288 kic.go:194] Starting extracting preloaded images to volume ...
	I1217 00:37:15.373210  214288 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22168-12816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-803959:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir
	I1217 00:37:19.677309  214288 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22168-12816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-803959:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir: (4.304047005s)
	I1217 00:37:19.677344  214288 kic.go:203] duration metric: took 4.30418664s to extract preloaded images to volume ...
	W1217 00:37:19.677421  214288 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1217 00:37:19.677446  214288 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1217 00:37:19.677491  214288 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1217 00:37:19.503285  211439 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1217 00:37:19.503324  211439 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	W1217 00:37:15.531496  213566 cli_runner.go:211] docker container inspect missing-upgrade-043393 --format={{.State.Status}} returned with exit code 1
	I1217 00:37:15.531550  213566 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-043393": docker container inspect missing-upgrade-043393 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-043393
	I1217 00:37:15.531564  213566 oci.go:673] temporary error: container missing-upgrade-043393 status is  but expect it to be exited
	I1217 00:37:15.531601  213566 retry.go:31] will retry after 1.394778783s: couldn't verify container is exited. %v: unknown state "missing-upgrade-043393": docker container inspect missing-upgrade-043393 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-043393
	I1217 00:37:16.927105  213566 cli_runner.go:164] Run: docker container inspect missing-upgrade-043393 --format={{.State.Status}}
	W1217 00:37:16.945629  213566 cli_runner.go:211] docker container inspect missing-upgrade-043393 --format={{.State.Status}} returned with exit code 1
	I1217 00:37:16.945706  213566 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-043393": docker container inspect missing-upgrade-043393 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-043393
	I1217 00:37:16.945721  213566 oci.go:673] temporary error: container missing-upgrade-043393 status is  but expect it to be exited
	I1217 00:37:16.945755  213566 retry.go:31] will retry after 3.402441004s: couldn't verify container is exited. %v: unknown state "missing-upgrade-043393": docker container inspect missing-upgrade-043393 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-043393
	I1217 00:37:20.351124  213566 cli_runner.go:164] Run: docker container inspect missing-upgrade-043393 --format={{.State.Status}}
	W1217 00:37:20.372430  213566 cli_runner.go:211] docker container inspect missing-upgrade-043393 --format={{.State.Status}} returned with exit code 1
	I1217 00:37:20.372516  213566 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-043393": docker container inspect missing-upgrade-043393 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-043393
	I1217 00:37:20.372532  213566 oci.go:673] temporary error: container missing-upgrade-043393 status is  but expect it to be exited
	I1217 00:37:20.372565  213566 retry.go:31] will retry after 3.019199494s: couldn't verify container is exited. %v: unknown state "missing-upgrade-043393": docker container inspect missing-upgrade-043393 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-043393
	I1217 00:37:16.153598  214740 out.go:252] * Updating the running docker "pause-004564" container ...
	I1217 00:37:16.153635  214740 machine.go:94] provisionDockerMachine start ...
	I1217 00:37:16.153701  214740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-004564
	I1217 00:37:16.172258  214740 main.go:143] libmachine: Using SSH client type: native
	I1217 00:37:16.172606  214740 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 32978 <nil> <nil>}
	I1217 00:37:16.172628  214740 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 00:37:16.300764  214740 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-004564
	
	I1217 00:37:16.300793  214740 ubuntu.go:182] provisioning hostname "pause-004564"
	I1217 00:37:16.300867  214740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-004564
	I1217 00:37:16.321794  214740 main.go:143] libmachine: Using SSH client type: native
	I1217 00:37:16.322147  214740 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 32978 <nil> <nil>}
	I1217 00:37:16.322170  214740 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-004564 && echo "pause-004564" | sudo tee /etc/hostname
	I1217 00:37:16.455301  214740 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-004564
	
	I1217 00:37:16.455395  214740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-004564
	I1217 00:37:16.473827  214740 main.go:143] libmachine: Using SSH client type: native
	I1217 00:37:16.474091  214740 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 32978 <nil> <nil>}
	I1217 00:37:16.474123  214740 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-004564' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-004564/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-004564' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 00:37:16.600659  214740 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 00:37:16.600685  214740 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22168-12816/.minikube CaCertPath:/home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22168-12816/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22168-12816/.minikube}
	I1217 00:37:16.600737  214740 ubuntu.go:190] setting up certificates
	I1217 00:37:16.600748  214740 provision.go:84] configureAuth start
	I1217 00:37:16.600801  214740 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-004564
	I1217 00:37:16.621349  214740 provision.go:143] copyHostCerts
	I1217 00:37:16.621415  214740 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-12816/.minikube/cert.pem, removing ...
	I1217 00:37:16.621429  214740 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-12816/.minikube/cert.pem
	I1217 00:37:16.621493  214740 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22168-12816/.minikube/cert.pem (1123 bytes)
	I1217 00:37:16.621635  214740 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-12816/.minikube/key.pem, removing ...
	I1217 00:37:16.621647  214740 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-12816/.minikube/key.pem
	I1217 00:37:16.621673  214740 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22168-12816/.minikube/key.pem (1679 bytes)
	I1217 00:37:16.621744  214740 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-12816/.minikube/ca.pem, removing ...
	I1217 00:37:16.621752  214740 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-12816/.minikube/ca.pem
	I1217 00:37:16.621775  214740 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22168-12816/.minikube/ca.pem (1078 bytes)
	I1217 00:37:16.621845  214740 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22168-12816/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca-key.pem org=jenkins.pause-004564 san=[127.0.0.1 192.168.85.2 localhost minikube pause-004564]
	I1217 00:37:16.769583  214740 provision.go:177] copyRemoteCerts
	I1217 00:37:16.769652  214740 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 00:37:16.769706  214740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-004564
	I1217 00:37:16.788136  214740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32978 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/pause-004564/id_rsa Username:docker}
	I1217 00:37:16.882035  214740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1217 00:37:16.900209  214740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1217 00:37:16.916656  214740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 00:37:16.934684  214740 provision.go:87] duration metric: took 333.914753ms to configureAuth
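configureAuth above regenerates the machine's server certificate with the SANs listed in the provision line (127.0.0.1, 192.168.85.2, localhost, minikube, pause-004564). When a TLS handshake to the machine fails, inspecting those SANs directly is the fastest check; a minimal sketch using the .minikube layout from this log:

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/22168-12816/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'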
	I1217 00:37:16.934724  214740 ubuntu.go:206] setting minikube options for container-runtime
	I1217 00:37:16.935020  214740 config.go:182] Loaded profile config "pause-004564": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:37:16.935124  214740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-004564
	I1217 00:37:16.952944  214740 main.go:143] libmachine: Using SSH client type: native
	I1217 00:37:16.953256  214740 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 32978 <nil> <nil>}
	I1217 00:37:16.953292  214740 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 00:37:19.847288  214740 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 00:37:19.847313  214740 machine.go:97] duration metric: took 3.693669477s to provisionDockerMachine
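Most of the 3.69s spent in provisionDockerMachine is the CRI-O restart triggered after CRIO_MINIKUBE_OPTIONS is written to /etc/sysconfig/crio.minikube. Confirming that the option landed and that the daemon came back is a one-liner (a sketch; it assumes the pause-004564 profile is still up):

    minikube -p pause-004564 ssh -- "cat /etc/sysconfig/crio.minikube && systemctl is-active crio"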
	I1217 00:37:19.847332  214740 start.go:293] postStartSetup for "pause-004564" (driver="docker")
	I1217 00:37:19.847345  214740 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 00:37:19.847416  214740 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 00:37:19.847470  214740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-004564
	I1217 00:37:19.869519  214740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32978 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/pause-004564/id_rsa Username:docker}
	I1217 00:37:19.965021  214740 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 00:37:19.969046  214740 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 00:37:19.969072  214740 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 00:37:19.969082  214740 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-12816/.minikube/addons for local assets ...
	I1217 00:37:19.969122  214740 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-12816/.minikube/files for local assets ...
	I1217 00:37:19.969190  214740 filesync.go:149] local asset: /home/jenkins/minikube-integration/22168-12816/.minikube/files/etc/ssl/certs/163542.pem -> 163542.pem in /etc/ssl/certs
	I1217 00:37:19.969276  214740 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 00:37:19.977369  214740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/files/etc/ssl/certs/163542.pem --> /etc/ssl/certs/163542.pem (1708 bytes)
	I1217 00:37:19.994941  214740 start.go:296] duration metric: took 147.593732ms for postStartSetup
	I1217 00:37:19.995033  214740 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 00:37:19.995082  214740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-004564
	I1217 00:37:20.014156  214740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32978 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/pause-004564/id_rsa Username:docker}
	I1217 00:37:20.106785  214740 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 00:37:20.111391  214740 fix.go:56] duration metric: took 3.979116257s for fixHost
	I1217 00:37:20.111415  214740 start.go:83] releasing machines lock for "pause-004564", held for 3.979173755s
	I1217 00:37:20.111471  214740 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-004564
	I1217 00:37:20.130280  214740 ssh_runner.go:195] Run: cat /version.json
	I1217 00:37:20.130336  214740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-004564
	I1217 00:37:20.130359  214740 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 00:37:20.130429  214740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-004564
	I1217 00:37:20.149651  214740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32978 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/pause-004564/id_rsa Username:docker}
	I1217 00:37:20.150278  214740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32978 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/pause-004564/id_rsa Username:docker}
	I1217 00:37:20.322588  214740 ssh_runner.go:195] Run: systemctl --version
	I1217 00:37:20.329674  214740 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 00:37:20.370168  214740 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 00:37:20.375433  214740 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 00:37:20.375494  214740 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 00:37:20.384359  214740 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1217 00:37:20.384389  214740 start.go:496] detecting cgroup driver to use...
	I1217 00:37:20.384422  214740 detect.go:190] detected "systemd" cgroup driver on host os
	I1217 00:37:20.384475  214740 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 00:37:20.400326  214740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 00:37:20.413876  214740 docker.go:218] disabling cri-docker service (if available) ...
	I1217 00:37:20.413943  214740 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 00:37:20.428818  214740 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 00:37:20.441055  214740 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 00:37:20.563323  214740 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 00:37:20.673543  214740 docker.go:234] disabling docker service ...
	I1217 00:37:20.673626  214740 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 00:37:20.687952  214740 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 00:37:20.699555  214740 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 00:37:20.806184  214740 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 00:37:20.911088  214740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 00:37:20.922920  214740 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 00:37:20.936273  214740 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 00:37:20.936321  214740 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:37:20.944471  214740 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1217 00:37:20.944511  214740 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:37:20.952639  214740 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:37:20.960687  214740 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:37:20.968665  214740 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 00:37:20.976017  214740 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:37:20.983949  214740 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:37:20.991557  214740 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:37:20.999490  214740 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 00:37:21.006585  214740 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 00:37:21.013358  214740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:37:21.113894  214740 ssh_runner.go:195] Run: sudo systemctl restart crio
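The sed edits above pin the pause image to registry.k8s.io/pause:3.10.1, switch cgroup_manager to systemd, set conmon_cgroup to pod, and add net.ipv4.ip_unprivileged_port_start=0 to default_sysctls before CRI-O is restarted. The effective values can be read back from the same drop-in, and the crictl endpoint written to /etc/crictl.yaml can be exercised at the same time (a hedged sketch, run on the node):

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    sudo crictl info > /dev/null && echo "CRI-O reachable via $(grep runtime-endpoint /etc/crictl.yaml)"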
	I1217 00:37:21.280679  214740 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 00:37:21.280748  214740 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 00:37:21.284596  214740 start.go:564] Will wait 60s for crictl version
	I1217 00:37:21.284662  214740 ssh_runner.go:195] Run: which crictl
	I1217 00:37:21.287987  214740 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 00:37:21.311961  214740 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1217 00:37:21.312051  214740 ssh_runner.go:195] Run: crio --version
	I1217 00:37:21.338341  214740 ssh_runner.go:195] Run: crio --version
	I1217 00:37:21.365515  214740 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1217 00:37:21.366761  214740 cli_runner.go:164] Run: docker network inspect pause-004564 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 00:37:21.383632  214740 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1217 00:37:21.387730  214740 kubeadm.go:884] updating cluster {Name:pause-004564 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-004564 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false regist
ry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 00:37:21.387887  214740 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1217 00:37:21.387941  214740 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 00:37:21.419683  214740 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 00:37:21.419713  214740 crio.go:433] Images already preloaded, skipping extraction
	I1217 00:37:21.419765  214740 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 00:37:21.442611  214740 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 00:37:21.442631  214740 cache_images.go:86] Images are preloaded, skipping loading
	I1217 00:37:21.442638  214740 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.2 crio true true} ...
	I1217 00:37:21.442747  214740 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-004564 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:pause-004564 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
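The kubelet drop-in printed above clears ExecStart and re-points it at the versioned binary under /var/lib/minikube/binaries, adding --hostname-override and --node-ip for this node. After it is copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below, systemd can show the rendered unit directly; a minimal sketch:

    systemctl cat kubelet                              # base unit plus the 10-kubeadm.conf drop-in
    systemctl show kubelet -p ExecStart --no-pager     # the final command line kubelet runs with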
	I1217 00:37:21.442823  214740 ssh_runner.go:195] Run: crio config
	I1217 00:37:21.485181  214740 cni.go:84] Creating CNI manager for ""
	I1217 00:37:21.485204  214740 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 00:37:21.485219  214740 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 00:37:21.485246  214740 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-004564 NodeName:pause-004564 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes
/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 00:37:21.485388  214740 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-004564"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 00:37:21.485453  214740 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1217 00:37:21.493520  214740 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 00:37:21.493582  214740 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 00:37:21.501042  214740 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1217 00:37:21.513195  214740 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1217 00:37:21.525853  214740 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2208 bytes)
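At this point the kubeadm documents printed above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) have been written to /var/tmp/minikube/kubeadm.yaml.new on the node. A grep over the fields that matter for this run is a cheap sanity check; recent kubeadm releases also have a "kubeadm config validate" subcommand, though whether the bundled v1.34.2 binary is used that way here is an assumption:

    sudo grep -E 'kubernetesVersion|controlPlaneEndpoint|podSubnet|cgroupDriver' \
      /var/tmp/minikube/kubeadm.yaml.new
    # optional, assuming the subcommand is available in the bundled binary:
    # sudo /var/lib/minikube/binaries/v1.34.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new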
	I1217 00:37:21.537812  214740 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1217 00:37:21.541199  214740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:37:21.648322  214740 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 00:37:21.661734  214740 certs.go:69] Setting up /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/pause-004564 for IP: 192.168.85.2
	I1217 00:37:21.661756  214740 certs.go:195] generating shared ca certs ...
	I1217 00:37:21.661770  214740 certs.go:227] acquiring lock for ca certs: {Name:mk3fafd0dd66863a6056cb02497503a5e6afecf4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:37:21.661924  214740 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22168-12816/.minikube/ca.key
	I1217 00:37:21.661973  214740 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22168-12816/.minikube/proxy-client-ca.key
	I1217 00:37:21.661984  214740 certs.go:257] generating profile certs ...
	I1217 00:37:21.662130  214740 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/pause-004564/client.key
	I1217 00:37:21.662187  214740 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/pause-004564/apiserver.key.40c312f8
	I1217 00:37:21.662240  214740 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/pause-004564/proxy-client.key
	I1217 00:37:21.662343  214740 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/16354.pem (1338 bytes)
	W1217 00:37:21.662382  214740 certs.go:480] ignoring /home/jenkins/minikube-integration/22168-12816/.minikube/certs/16354_empty.pem, impossibly tiny 0 bytes
	I1217 00:37:21.662398  214740 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca-key.pem (1679 bytes)
	I1217 00:37:21.662426  214740 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem (1078 bytes)
	I1217 00:37:21.662459  214740 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/cert.pem (1123 bytes)
	I1217 00:37:21.662490  214740 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/key.pem (1679 bytes)
	I1217 00:37:21.662548  214740 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/files/etc/ssl/certs/163542.pem (1708 bytes)
	I1217 00:37:21.663177  214740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 00:37:21.680387  214740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1217 00:37:21.697202  214740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 00:37:21.714133  214740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1671 bytes)
	I1217 00:37:21.730900  214740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/pause-004564/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1217 00:37:21.747761  214740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/pause-004564/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1217 00:37:21.765226  214740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/pause-004564/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 00:37:21.782757  214740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/pause-004564/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 00:37:21.800624  214740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 00:37:21.818143  214740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/certs/16354.pem --> /usr/share/ca-certificates/16354.pem (1338 bytes)
	I1217 00:37:21.834697  214740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/files/etc/ssl/certs/163542.pem --> /usr/share/ca-certificates/163542.pem (1708 bytes)
	I1217 00:37:21.851180  214740 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 00:37:21.862738  214740 ssh_runner.go:195] Run: openssl version
	I1217 00:37:21.868591  214740 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:37:21.875375  214740 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 00:37:21.882881  214740 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:37:21.886393  214740 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 00:05 /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:37:21.886430  214740 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:37:21.921718  214740 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 00:37:21.929529  214740 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/16354.pem
	I1217 00:37:21.937114  214740 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/16354.pem /etc/ssl/certs/16354.pem
	I1217 00:37:21.944010  214740 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16354.pem
	I1217 00:37:21.947579  214740 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 00:13 /usr/share/ca-certificates/16354.pem
	I1217 00:37:21.947618  214740 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16354.pem
	I1217 00:37:21.982072  214740 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 00:37:21.989116  214740 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/163542.pem
	I1217 00:37:21.996286  214740 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/163542.pem /etc/ssl/certs/163542.pem
	I1217 00:37:22.003347  214740 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/163542.pem
	I1217 00:37:22.006887  214740 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 00:13 /usr/share/ca-certificates/163542.pem
	I1217 00:37:22.006950  214740 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/163542.pem
	I1217 00:37:22.040695  214740 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
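The test -L probes above (b5213941.0, 51391683.0, 3ec20f2e.0) check the OpenSSL subject-hash symlinks that make the copied PEMs visible under /etc/ssl/certs. The pattern being exercised can be reproduced by hand; a sketch using the minikubeCA.pem name from this log:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"
    ls -l "/etc/ssl/certs/${h}.0"   # should point back at the PEM, e.g. b5213941.0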
	I1217 00:37:22.047892  214740 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 00:37:22.051504  214740 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 00:37:22.085055  214740 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 00:37:22.118289  214740 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 00:37:22.151209  214740 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 00:37:22.184271  214740 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 00:37:22.217101  214740 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
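Each of the openssl -checkend 86400 runs above exits non-zero if the certificate expires within the next 24 hours, which is how expiring control-plane certs are detected before being reused. The same check over all of them in one loop (a sketch, not minikube's own code):

    for f in /var/lib/minikube/certs/apiserver*.crt \
             /var/lib/minikube/certs/etcd/*.crt \
             /var/lib/minikube/certs/front-proxy-client.crt; do
      sudo openssl x509 -noout -checkend 86400 -in "$f" && echo "OK        $f" || echo "EXPIRING  $f"
    done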
	I1217 00:37:22.250187  214740 kubeadm.go:401] StartCluster: {Name:pause-004564 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-004564 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[
] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-
aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:37:22.250327  214740 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 00:37:22.250371  214740 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 00:37:22.276640  214740 cri.go:89] found id: "261675999c5abb3189891f43e3b1b48ba4936a0794d59ca8d49a88c9a6851f5b"
	I1217 00:37:22.276658  214740 cri.go:89] found id: "e75763c0a782db6b82bbcdce6755f2e6f1c075e2577e8608b78caad6bd9e0685"
	I1217 00:37:22.276671  214740 cri.go:89] found id: "312296c55960188c8b0406f7ef3b76b6aa39658e5891d4d5d9cb3e5c5de8a96b"
	I1217 00:37:22.276678  214740 cri.go:89] found id: "2ec6662e3cdaf85a7ed50022d863b9b780287c6ce573ffe5414b6462ad51e698"
	I1217 00:37:22.276683  214740 cri.go:89] found id: "8ad49dd6c20bb9213f368f20baf3c0d05e5d7b019e452f80bf3758ec6690483c"
	I1217 00:37:22.276688  214740 cri.go:89] found id: "822c708ab7dfc212f531b65b81db4f5bc4505719e6a921f57d1232f703310c0a"
	I1217 00:37:22.276693  214740 cri.go:89] found id: "75a2fcfe1b2251781ce5430b7e7160f17def85d218ac84eb522f00d0f2ce3ccb"
	I1217 00:37:22.276698  214740 cri.go:89] found id: ""
	I1217 00:37:22.276743  214740 ssh_runner.go:195] Run: sudo runc list -f json
	W1217 00:37:22.287481  214740 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T00:37:22Z" level=error msg="open /run/runc: no such file or directory"
	I1217 00:37:22.287551  214740 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 00:37:22.294970  214740 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1217 00:37:22.295007  214740 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1217 00:37:22.295044  214740 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1217 00:37:22.301902  214740 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1217 00:37:22.302541  214740 kubeconfig.go:125] found "pause-004564" server: "https://192.168.85.2:8443"
	I1217 00:37:22.303468  214740 kapi.go:59] client config for pause-004564: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22168-12816/.minikube/profiles/pause-004564/client.crt", KeyFile:"/home/jenkins/minikube-integration/22168-12816/.minikube/profiles/pause-004564/client.key", CAFile:"/home/jenkins/minikube-integration/22168-12816/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]stri
ng(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28171a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1217 00:37:22.303851  214740 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1217 00:37:22.303871  214740 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1217 00:37:22.303878  214740 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1217 00:37:22.303883  214740 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1217 00:37:22.303889  214740 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1217 00:37:22.304221  214740 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1217 00:37:22.311322  214740 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1217 00:37:22.311348  214740 kubeadm.go:602] duration metric: took 16.335242ms to restartPrimaryControlPlane
	I1217 00:37:22.311358  214740 kubeadm.go:403] duration metric: took 61.180088ms to StartCluster
	I1217 00:37:22.311373  214740 settings.go:142] acquiring lock: {Name:mk7d7632cd00ceda791845d793d841181ea8188a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:37:22.311431  214740 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22168-12816/kubeconfig
	I1217 00:37:22.312307  214740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/kubeconfig: {Name:mkd977475b39383babd1c1bcd25b2b3c1ea2d727 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:37:22.312533  214740 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 00:37:22.312587  214740 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 00:37:22.312760  214740 config.go:182] Loaded profile config "pause-004564": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:37:22.315075  214740 out.go:179] * Enabled addons: 
	I1217 00:37:22.315084  214740 out.go:179] * Verifying Kubernetes components...
	I1217 00:37:22.316126  214740 addons.go:530] duration metric: took 3.54579ms for enable addons: enabled=[]
	I1217 00:37:22.316167  214740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:37:22.420520  214740 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 00:37:22.432775  214740 node_ready.go:35] waiting up to 6m0s for node "pause-004564" to be "Ready" ...
	I1217 00:37:22.440178  214740 node_ready.go:49] node "pause-004564" is "Ready"
	I1217 00:37:22.440203  214740 node_ready.go:38] duration metric: took 7.392338ms for node "pause-004564" to be "Ready" ...
	I1217 00:37:22.440216  214740 api_server.go:52] waiting for apiserver process to appear ...
	I1217 00:37:22.440262  214740 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:37:22.451680  214740 api_server.go:72] duration metric: took 139.121359ms to wait for apiserver process to appear ...
	I1217 00:37:22.451701  214740 api_server.go:88] waiting for apiserver healthz status ...
	I1217 00:37:22.451721  214740 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1217 00:37:22.455335  214740 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1217 00:37:22.456064  214740 api_server.go:141] control plane version: v1.34.2
	I1217 00:37:22.456082  214740 api_server.go:131] duration metric: took 4.375754ms to wait for apiserver health ...
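The health wait above polls https://192.168.85.2:8443/healthz until it returns 200. With the docker driver the same probe works from the host, and /healthz is usually readable without credentials via the default system:public-info-viewer binding, though that default is an assumption about this cluster:

    curl -sk https://192.168.85.2:8443/healthz ; echo    # expected: ok
    curl -sk https://192.168.85.2:8443/version           # served to unauthenticated clients by default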
	I1217 00:37:22.456090  214740 system_pods.go:43] waiting for kube-system pods to appear ...
	I1217 00:37:22.458689  214740 system_pods.go:59] 7 kube-system pods found
	I1217 00:37:22.458718  214740 system_pods.go:61] "coredns-66bc5c9577-9srqt" [20274228-c195-4865-9917-969f5e20ced1] Running
	I1217 00:37:22.458724  214740 system_pods.go:61] "etcd-pause-004564" [21023270-c3c6-4b81-af27-19ee45d40732] Running
	I1217 00:37:22.458727  214740 system_pods.go:61] "kindnet-7hj2r" [ff07be68-3286-4d8e-8ef8-3d79b41d1932] Running
	I1217 00:37:22.458731  214740 system_pods.go:61] "kube-apiserver-pause-004564" [ac3fbc82-ca93-433f-bf43-490a35f17f4e] Running
	I1217 00:37:22.458735  214740 system_pods.go:61] "kube-controller-manager-pause-004564" [0b2d3115-6b57-401b-9dd8-0fd436ecbc04] Running
	I1217 00:37:22.458738  214740 system_pods.go:61] "kube-proxy-42nwb" [da8260dd-d848-423f-a84e-d902bebad105] Running
	I1217 00:37:22.458743  214740 system_pods.go:61] "kube-scheduler-pause-004564" [03c56ab0-b2f5-402f-955a-bf26dd76fd1a] Running
	I1217 00:37:22.458751  214740 system_pods.go:74] duration metric: took 2.656602ms to wait for pod list to return data ...
	I1217 00:37:22.458756  214740 default_sa.go:34] waiting for default service account to be created ...
	I1217 00:37:22.460308  214740 default_sa.go:45] found service account: "default"
	I1217 00:37:22.460324  214740 default_sa.go:55] duration metric: took 1.562793ms for default service account to be created ...
	I1217 00:37:22.460331  214740 system_pods.go:116] waiting for k8s-apps to be running ...
	I1217 00:37:22.462416  214740 system_pods.go:86] 7 kube-system pods found
	I1217 00:37:22.462443  214740 system_pods.go:89] "coredns-66bc5c9577-9srqt" [20274228-c195-4865-9917-969f5e20ced1] Running
	I1217 00:37:22.462448  214740 system_pods.go:89] "etcd-pause-004564" [21023270-c3c6-4b81-af27-19ee45d40732] Running
	I1217 00:37:22.462451  214740 system_pods.go:89] "kindnet-7hj2r" [ff07be68-3286-4d8e-8ef8-3d79b41d1932] Running
	I1217 00:37:22.462455  214740 system_pods.go:89] "kube-apiserver-pause-004564" [ac3fbc82-ca93-433f-bf43-490a35f17f4e] Running
	I1217 00:37:22.462458  214740 system_pods.go:89] "kube-controller-manager-pause-004564" [0b2d3115-6b57-401b-9dd8-0fd436ecbc04] Running
	I1217 00:37:22.462461  214740 system_pods.go:89] "kube-proxy-42nwb" [da8260dd-d848-423f-a84e-d902bebad105] Running
	I1217 00:37:22.462465  214740 system_pods.go:89] "kube-scheduler-pause-004564" [03c56ab0-b2f5-402f-955a-bf26dd76fd1a] Running
	I1217 00:37:22.462470  214740 system_pods.go:126] duration metric: took 2.134549ms to wait for k8s-apps to be running ...
	I1217 00:37:22.462479  214740 system_svc.go:44] waiting for kubelet service to be running ....
	I1217 00:37:22.462526  214740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 00:37:22.474146  214740 system_svc.go:56] duration metric: took 11.663396ms WaitForService to wait for kubelet
	I1217 00:37:22.474169  214740 kubeadm.go:587] duration metric: took 161.612311ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 00:37:22.474189  214740 node_conditions.go:102] verifying NodePressure condition ...
	I1217 00:37:22.475757  214740 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1217 00:37:22.475776  214740 node_conditions.go:123] node cpu capacity is 8
	I1217 00:37:22.475792  214740 node_conditions.go:105] duration metric: took 1.59607ms to run NodePressure ...
	I1217 00:37:22.475801  214740 start.go:242] waiting for startup goroutines ...
	I1217 00:37:22.475808  214740 start.go:247] waiting for cluster config update ...
	I1217 00:37:22.475815  214740 start.go:256] writing updated cluster config ...
	I1217 00:37:22.476088  214740 ssh_runner.go:195] Run: rm -f paused
	I1217 00:37:22.479453  214740 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 00:37:22.480042  214740 kapi.go:59] client config for pause-004564: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22168-12816/.minikube/profiles/pause-004564/client.crt", KeyFile:"/home/jenkins/minikube-integration/22168-12816/.minikube/profiles/pause-004564/client.key", CAFile:"/home/jenkins/minikube-integration/22168-12816/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]stri
ng(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28171a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1217 00:37:22.482102  214740 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-9srqt" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:37:22.485237  214740 pod_ready.go:94] pod "coredns-66bc5c9577-9srqt" is "Ready"
	I1217 00:37:22.485258  214740 pod_ready.go:86] duration metric: took 3.138722ms for pod "coredns-66bc5c9577-9srqt" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:37:22.486650  214740 pod_ready.go:83] waiting for pod "etcd-pause-004564" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:37:22.489557  214740 pod_ready.go:94] pod "etcd-pause-004564" is "Ready"
	I1217 00:37:22.489573  214740 pod_ready.go:86] duration metric: took 2.907346ms for pod "etcd-pause-004564" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:37:22.491011  214740 pod_ready.go:83] waiting for pod "kube-apiserver-pause-004564" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:37:22.493985  214740 pod_ready.go:94] pod "kube-apiserver-pause-004564" is "Ready"
	I1217 00:37:22.494025  214740 pod_ready.go:86] duration metric: took 2.996511ms for pod "kube-apiserver-pause-004564" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:37:22.495417  214740 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-004564" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:37:22.882759  214740 pod_ready.go:94] pod "kube-controller-manager-pause-004564" is "Ready"
	I1217 00:37:22.882785  214740 pod_ready.go:86] duration metric: took 387.352194ms for pod "kube-controller-manager-pause-004564" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:37:23.083532  214740 pod_ready.go:83] waiting for pod "kube-proxy-42nwb" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:37:23.483037  214740 pod_ready.go:94] pod "kube-proxy-42nwb" is "Ready"
	I1217 00:37:23.483064  214740 pod_ready.go:86] duration metric: took 399.507073ms for pod "kube-proxy-42nwb" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:37:23.682669  214740 pod_ready.go:83] waiting for pod "kube-scheduler-pause-004564" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:37:24.083039  214740 pod_ready.go:94] pod "kube-scheduler-pause-004564" is "Ready"
	I1217 00:37:24.083063  214740 pod_ready.go:86] duration metric: took 400.371974ms for pod "kube-scheduler-pause-004564" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:37:24.083074  214740 pod_ready.go:40] duration metric: took 1.603595614s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 00:37:24.124051  214740 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1217 00:37:24.126627  214740 out.go:179] * Done! kubectl is now configured to use "pause-004564" cluster and "default" namespace by default
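The readiness loop above waits up to 4m for one pod per control-plane label (k8s-app=kube-dns, component=etcd, component=kube-apiserver, component=kube-controller-manager, k8s-app=kube-proxy, component=kube-scheduler). The equivalent ad-hoc check with kubectl, against the context this run just configured (a sketch; selectors taken from the log):

    kubectl --context pause-004564 -n kube-system get pods -o wide
    kubectl --context pause-004564 -n kube-system wait --for=condition=Ready \
      pod -l component=kube-apiserver --timeout=60s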
	I1217 00:37:19.735684  214288 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kubernetes-upgrade-803959 --name kubernetes-upgrade-803959 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-803959 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kubernetes-upgrade-803959 --network kubernetes-upgrade-803959 --ip 192.168.76.2 --volume kubernetes-upgrade-803959:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78
	I1217 00:37:20.011686  214288 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-803959 --format={{.State.Running}}
	I1217 00:37:20.031100  214288 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-803959 --format={{.State.Status}}
	I1217 00:37:20.049221  214288 cli_runner.go:164] Run: docker exec kubernetes-upgrade-803959 stat /var/lib/dpkg/alternatives/iptables
	I1217 00:37:20.096515  214288 oci.go:144] the created container "kubernetes-upgrade-803959" has a running status.
	I1217 00:37:20.096540  214288 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22168-12816/.minikube/machines/kubernetes-upgrade-803959/id_rsa...
	I1217 00:37:20.119063  214288 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22168-12816/.minikube/machines/kubernetes-upgrade-803959/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1217 00:37:20.149430  214288 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-803959 --format={{.State.Status}}
	I1217 00:37:20.172660  214288 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1217 00:37:20.172683  214288 kic_runner.go:114] Args: [docker exec --privileged kubernetes-upgrade-803959 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1217 00:37:20.216510  214288 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-803959 --format={{.State.Status}}
	I1217 00:37:20.237344  214288 machine.go:94] provisionDockerMachine start ...
	I1217 00:37:20.237430  214288 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-803959
	I1217 00:37:20.259176  214288 main.go:143] libmachine: Using SSH client type: native
	I1217 00:37:20.259560  214288 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33003 <nil> <nil>}
	I1217 00:37:20.259584  214288 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 00:37:20.260314  214288 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:41214->127.0.0.1:33003: read: connection reset by peer
	I1217 00:37:23.386034  214288 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-803959
	
	I1217 00:37:23.386062  214288 ubuntu.go:182] provisioning hostname "kubernetes-upgrade-803959"
	I1217 00:37:23.386115  214288 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-803959
	I1217 00:37:23.404553  214288 main.go:143] libmachine: Using SSH client type: native
	I1217 00:37:23.404832  214288 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33003 <nil> <nil>}
	I1217 00:37:23.404855  214288 main.go:143] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-803959 && echo "kubernetes-upgrade-803959" | sudo tee /etc/hostname
	I1217 00:37:23.539464  214288 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-803959
	
	I1217 00:37:23.539539  214288 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-803959
	I1217 00:37:23.556986  214288 main.go:143] libmachine: Using SSH client type: native
	I1217 00:37:23.557224  214288 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33003 <nil> <nil>}
	I1217 00:37:23.557241  214288 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-803959' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-803959/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-803959' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 00:37:23.684037  214288 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 00:37:23.684061  214288 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22168-12816/.minikube CaCertPath:/home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22168-12816/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22168-12816/.minikube}
	I1217 00:37:23.684082  214288 ubuntu.go:190] setting up certificates
	I1217 00:37:23.684097  214288 provision.go:84] configureAuth start
	I1217 00:37:23.684146  214288 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-803959
	I1217 00:37:23.702757  214288 provision.go:143] copyHostCerts
	I1217 00:37:23.702840  214288 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-12816/.minikube/ca.pem, removing ...
	I1217 00:37:23.702856  214288 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-12816/.minikube/ca.pem
	I1217 00:37:23.702940  214288 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22168-12816/.minikube/ca.pem (1078 bytes)
	I1217 00:37:23.703083  214288 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-12816/.minikube/cert.pem, removing ...
	I1217 00:37:23.703097  214288 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-12816/.minikube/cert.pem
	I1217 00:37:23.703141  214288 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22168-12816/.minikube/cert.pem (1123 bytes)
	I1217 00:37:23.703245  214288 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-12816/.minikube/key.pem, removing ...
	I1217 00:37:23.703256  214288 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-12816/.minikube/key.pem
	I1217 00:37:23.703291  214288 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22168-12816/.minikube/key.pem (1679 bytes)
	I1217 00:37:23.703388  214288 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22168-12816/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-803959 san=[127.0.0.1 192.168.76.2 kubernetes-upgrade-803959 localhost minikube]
	I1217 00:37:23.821073  214288 provision.go:177] copyRemoteCerts
	I1217 00:37:23.821135  214288 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 00:37:23.821166  214288 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-803959
	I1217 00:37:23.838853  214288 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33003 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/kubernetes-upgrade-803959/id_rsa Username:docker}
	I1217 00:37:23.930845  214288 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1217 00:37:23.950601  214288 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1217 00:37:23.967643  214288 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 00:37:23.984599  214288 provision.go:87] duration metric: took 300.480777ms to configureAuth
	I1217 00:37:23.984627  214288 ubuntu.go:206] setting minikube options for container-runtime
	I1217 00:37:23.984834  214288 config.go:182] Loaded profile config "kubernetes-upgrade-803959": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1217 00:37:23.984956  214288 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-803959
	I1217 00:37:24.002834  214288 main.go:143] libmachine: Using SSH client type: native
	I1217 00:37:24.003120  214288 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33003 <nil> <nil>}
	I1217 00:37:24.003142  214288 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 00:37:24.274144  214288 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 00:37:24.274177  214288 machine.go:97] duration metric: took 4.036809944s to provisionDockerMachine
	I1217 00:37:24.274190  214288 client.go:176] duration metric: took 9.400259039s to LocalClient.Create
	I1217 00:37:24.274209  214288 start.go:167] duration metric: took 9.400321071s to libmachine.API.Create "kubernetes-upgrade-803959"
	I1217 00:37:24.274216  214288 start.go:293] postStartSetup for "kubernetes-upgrade-803959" (driver="docker")
	I1217 00:37:24.274225  214288 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 00:37:24.274273  214288 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 00:37:24.274317  214288 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-803959
	I1217 00:37:24.294252  214288 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33003 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/kubernetes-upgrade-803959/id_rsa Username:docker}
	I1217 00:37:24.387176  214288 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 00:37:24.390481  214288 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 00:37:24.390508  214288 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 00:37:24.390517  214288 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-12816/.minikube/addons for local assets ...
	I1217 00:37:24.390563  214288 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-12816/.minikube/files for local assets ...
	I1217 00:37:24.390652  214288 filesync.go:149] local asset: /home/jenkins/minikube-integration/22168-12816/.minikube/files/etc/ssl/certs/163542.pem -> 163542.pem in /etc/ssl/certs
	I1217 00:37:24.390767  214288 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 00:37:24.398008  214288 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/files/etc/ssl/certs/163542.pem --> /etc/ssl/certs/163542.pem (1708 bytes)
	I1217 00:37:24.417522  214288 start.go:296] duration metric: took 143.296478ms for postStartSetup
	I1217 00:37:24.417813  214288 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-803959
	I1217 00:37:24.435634  214288 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/kubernetes-upgrade-803959/config.json ...
	I1217 00:37:24.435858  214288 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 00:37:24.435898  214288 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-803959
	I1217 00:37:24.460089  214288 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33003 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/kubernetes-upgrade-803959/id_rsa Username:docker}
	I1217 00:37:24.548666  214288 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 00:37:24.553649  214288 start.go:128] duration metric: took 9.681473905s to createHost
	I1217 00:37:24.553673  214288 start.go:83] releasing machines lock for "kubernetes-upgrade-803959", held for 9.681612447s
	I1217 00:37:24.553727  214288 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-803959
	I1217 00:37:24.572438  214288 ssh_runner.go:195] Run: cat /version.json
	I1217 00:37:24.572491  214288 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-803959
	I1217 00:37:24.572516  214288 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 00:37:24.572613  214288 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-803959
	I1217 00:37:24.592132  214288 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33003 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/kubernetes-upgrade-803959/id_rsa Username:docker}
	I1217 00:37:24.593010  214288 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33003 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/kubernetes-upgrade-803959/id_rsa Username:docker}
	I1217 00:37:24.507076  211439 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1217 00:37:24.507116  211439 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1217 00:37:23.394610  213566 cli_runner.go:164] Run: docker container inspect missing-upgrade-043393 --format={{.State.Status}}
	W1217 00:37:23.412514  213566 cli_runner.go:211] docker container inspect missing-upgrade-043393 --format={{.State.Status}} returned with exit code 1
	I1217 00:37:23.412583  213566 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-043393": docker container inspect missing-upgrade-043393 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-043393
	I1217 00:37:23.412598  213566 oci.go:673] temporary error: container missing-upgrade-043393 status is  but expect it to be exited
	I1217 00:37:23.412634  213566 retry.go:31] will retry after 7.521946715s: couldn't verify container is exited. %v: unknown state "missing-upgrade-043393": docker container inspect missing-upgrade-043393 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-043393
	I1217 00:37:24.735118  214288 ssh_runner.go:195] Run: systemctl --version
	I1217 00:37:24.741360  214288 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 00:37:24.775927  214288 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 00:37:24.780968  214288 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 00:37:24.781050  214288 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 00:37:24.806625  214288 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1217 00:37:24.806647  214288 start.go:496] detecting cgroup driver to use...
	I1217 00:37:24.806676  214288 detect.go:190] detected "systemd" cgroup driver on host os
	I1217 00:37:24.806715  214288 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 00:37:24.822377  214288 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 00:37:24.834361  214288 docker.go:218] disabling cri-docker service (if available) ...
	I1217 00:37:24.834406  214288 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 00:37:24.850860  214288 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 00:37:24.875561  214288 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 00:37:24.969028  214288 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 00:37:25.064939  214288 docker.go:234] disabling docker service ...
	I1217 00:37:25.065006  214288 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 00:37:25.082208  214288 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 00:37:25.095490  214288 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 00:37:25.175274  214288 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 00:37:25.253975  214288 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 00:37:25.266329  214288 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 00:37:25.279486  214288 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1217 00:37:25.279536  214288 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:37:25.289029  214288 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1217 00:37:25.289078  214288 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:37:25.297239  214288 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:37:25.305121  214288 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:37:25.313119  214288 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 00:37:25.320471  214288 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:37:25.328187  214288 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:37:25.341852  214288 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:37:25.349964  214288 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 00:37:25.357414  214288 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 00:37:25.364181  214288 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:37:25.439737  214288 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1217 00:37:25.568057  214288 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 00:37:25.568126  214288 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 00:37:25.571936  214288 start.go:564] Will wait 60s for crictl version
	I1217 00:37:25.571984  214288 ssh_runner.go:195] Run: which crictl
	I1217 00:37:25.575398  214288 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 00:37:25.600173  214288 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1217 00:37:25.600257  214288 ssh_runner.go:195] Run: crio --version
	I1217 00:37:25.633214  214288 ssh_runner.go:195] Run: crio --version
	I1217 00:37:25.661443  214288 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.3 ...
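The configuration steps logged above (the tee of /etc/crictl.yaml, the sed edits to /etc/crio/crio.conf.d/02-crio.conf, then the crio restart) point crictl at the CRI-O socket and override a handful of runtime keys before Kubernetes is started. As an illustrative sketch only, derived from those commands and assuming an otherwise stock 02-crio.conf in the node image (not captured from the node itself), the resulting files would contain roughly:

	# /etc/crictl.yaml
	runtime-endpoint: unix:///var/run/crio/crio.sock

	# /etc/crio/crio.conf.d/02-crio.conf (keys touched by the overrides)
	pause_image = "registry.k8s.io/pause:3.9"
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]

Note that the pause image tracks the Kubernetes version being provisioned: 3.9 here for v1.28.0, while the CRI-O configuration dump for the pause-004564 node below shows registry.k8s.io/pause:3.10.1 for its v1.34.2 cluster.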
	
	
	==> CRI-O <==
	Dec 17 00:37:21 pause-004564 crio[2201]: time="2025-12-17T00:37:21.202831072Z" level=info msg="RDT not available in the host system"
	Dec 17 00:37:21 pause-004564 crio[2201]: time="2025-12-17T00:37:21.202846431Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 17 00:37:21 pause-004564 crio[2201]: time="2025-12-17T00:37:21.203564748Z" level=info msg="Conmon does support the --sync option"
	Dec 17 00:37:21 pause-004564 crio[2201]: time="2025-12-17T00:37:21.203586484Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 17 00:37:21 pause-004564 crio[2201]: time="2025-12-17T00:37:21.203599358Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 17 00:37:21 pause-004564 crio[2201]: time="2025-12-17T00:37:21.204358357Z" level=info msg="Conmon does support the --sync option"
	Dec 17 00:37:21 pause-004564 crio[2201]: time="2025-12-17T00:37:21.204372319Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 17 00:37:21 pause-004564 crio[2201]: time="2025-12-17T00:37:21.207867548Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 17 00:37:21 pause-004564 crio[2201]: time="2025-12-17T00:37:21.207899813Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 17 00:37:21 pause-004564 crio[2201]: time="2025-12-17T00:37:21.208455562Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hoo
ks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_
mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory
= \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    namespaced_auth_dir = \"/etc/crio/auth\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \
"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nr
i]\n    enable_nri = true\n    nri_listen = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Dec 17 00:37:21 pause-004564 crio[2201]: time="2025-12-17T00:37:21.208787599Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Dec 17 00:37:21 pause-004564 crio[2201]: time="2025-12-17T00:37:21.208849765Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Dec 17 00:37:21 pause-004564 crio[2201]: time="2025-12-17T00:37:21.276615123Z" level=info msg="Got pod network &{Name:coredns-66bc5c9577-9srqt Namespace:kube-system ID:9e94d64b900383c07367c4aebb899789be79a11c1f806f060a08a81fae82ceb7 UID:20274228-c195-4865-9917-969f5e20ced1 NetNS:/var/run/netns/8cac6a75-50c6-43fb-8cf6-464b36b4b831 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000732218}] Aliases:map[]}"
	Dec 17 00:37:21 pause-004564 crio[2201]: time="2025-12-17T00:37:21.276775427Z" level=info msg="Checking pod kube-system_coredns-66bc5c9577-9srqt for CNI network kindnet (type=ptp)"
	Dec 17 00:37:21 pause-004564 crio[2201]: time="2025-12-17T00:37:21.277177051Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 17 00:37:21 pause-004564 crio[2201]: time="2025-12-17T00:37:21.277199977Z" level=info msg="Starting seccomp notifier watcher"
	Dec 17 00:37:21 pause-004564 crio[2201]: time="2025-12-17T00:37:21.27723524Z" level=info msg="Create NRI interface"
	Dec 17 00:37:21 pause-004564 crio[2201]: time="2025-12-17T00:37:21.277303217Z" level=info msg="built-in NRI default validator is disabled"
	Dec 17 00:37:21 pause-004564 crio[2201]: time="2025-12-17T00:37:21.277308667Z" level=info msg="runtime interface created"
	Dec 17 00:37:21 pause-004564 crio[2201]: time="2025-12-17T00:37:21.277317574Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 17 00:37:21 pause-004564 crio[2201]: time="2025-12-17T00:37:21.27732269Z" level=info msg="runtime interface starting up..."
	Dec 17 00:37:21 pause-004564 crio[2201]: time="2025-12-17T00:37:21.277327383Z" level=info msg="starting plugins..."
	Dec 17 00:37:21 pause-004564 crio[2201]: time="2025-12-17T00:37:21.277336272Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 17 00:37:21 pause-004564 crio[2201]: time="2025-12-17T00:37:21.277612672Z" level=info msg="No systemd watchdog enabled"
	Dec 17 00:37:21 pause-004564 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	261675999c5ab       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   14 seconds ago      Running             coredns                   0                   9e94d64b90038       coredns-66bc5c9577-9srqt               kube-system
	e75763c0a782d       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   25 seconds ago      Running             kindnet-cni               0                   27daaa464ebaa       kindnet-7hj2r                          kube-system
	312296c559601       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45   25 seconds ago      Running             kube-proxy                0                   dde2846856718       kube-proxy-42nwb                       kube-system
	2ec6662e3cdaf       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8   36 seconds ago      Running             kube-controller-manager   0                   00fd6dd5f49ea       kube-controller-manager-pause-004564   kube-system
	8ad49dd6c20bb       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85   36 seconds ago      Running             kube-apiserver            0                   ad054bb0b24f2       kube-apiserver-pause-004564            kube-system
	822c708ab7dfc       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952   36 seconds ago      Running             kube-scheduler            0                   1c74ccab50b6b       kube-scheduler-pause-004564            kube-system
	75a2fcfe1b225       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   36 seconds ago      Running             etcd                      0                   b8f5ca0b0e943       etcd-pause-004564                      kube-system
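Each entry in the table above can be inspected directly with crictl on the node (the log locates the binary at /usr/local/bin/crictl). The commands below are examples for drilling into the coredns container from the first row, not part of the recorded test run:

	# inside the pause-004564 node, e.g. via `minikube ssh -p pause-004564`
	sudo crictl ps -a --name coredns    # list the container, running or exited
	sudo crictl logs 261675999c5ab      # its output (the coredns section below shows this container's log)
	sudo crictl inspect 261675999c5ab   # full runtime/state JSON for the container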
	
	
	==> coredns [261675999c5abb3189891f43e3b1b48ba4936a0794d59ca8d49a88c9a6851f5b] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:55164 - 4267 "HINFO IN 5990849036606057374.3411692580121151747. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.534861589s
	
	
	==> describe nodes <==
	Name:               pause-004564
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-004564
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c7bb9b74fe8fa422b352c813eb039f077f405cb1
	                    minikube.k8s.io/name=pause-004564
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_17T00_36_56_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Dec 2025 00:36:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-004564
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Dec 2025 00:37:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Dec 2025 00:37:12 +0000   Wed, 17 Dec 2025 00:36:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Dec 2025 00:37:12 +0000   Wed, 17 Dec 2025 00:36:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Dec 2025 00:37:12 +0000   Wed, 17 Dec 2025 00:36:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Dec 2025 00:37:12 +0000   Wed, 17 Dec 2025 00:37:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-004564
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 f435001bb2f82675fe905cfc693dda54
	  System UUID:                f5676ac5-495b-4003-91d5-52b86dab3f6f
	  Boot ID:                    0e9cedc6-c46e-4354-b3d2-9272a8b33ae5
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-9srqt                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     26s
	  kube-system                 etcd-pause-004564                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         32s
	  kube-system                 kindnet-7hj2r                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      26s
	  kube-system                 kube-apiserver-pause-004564             250m (3%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-controller-manager-pause-004564    200m (2%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-proxy-42nwb                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-scheduler-pause-004564             100m (1%)     0 (0%)      0 (0%)           0 (0%)         32s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 25s   kube-proxy       
	  Normal  Starting                 32s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  32s   kubelet          Node pause-004564 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    32s   kubelet          Node pause-004564 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     32s   kubelet          Node pause-004564 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           27s   node-controller  Node pause-004564 event: Registered Node pause-004564 in Controller
	  Normal  NodeReady                15s   kubelet          Node pause-004564 status is now: NodeReady
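For reference, the Allocated resources figures above are just the column sums from the pod table: CPU requests 100m + 100m + 100m + 250m + 200m + 0 + 100m = 850m, which against the node's 8 allocatable CPUs (8000m) is about 10.6%, shown as 10%; memory requests 70Mi + 100Mi + 50Mi = 220Mi, well under 1% of the 32863348Ki allocatable, hence 0%. The single 100m CPU limit comes from kindnet, and the 170Mi + 50Mi = 220Mi memory limits come from coredns and kindnet.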
	
	
	==> dmesg <==
	[  +0.089382] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024236] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.864694] kauditd_printk_skb: 47 callbacks suppressed
	[Dec17 00:07] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +1.006904] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +1.023891] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +1.023891] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +1.023853] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +1.023879] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +2.048755] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +4.030595] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +8.447143] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[ +16.382404] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000015] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[Dec17 00:08] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	
	
	==> etcd [75a2fcfe1b2251781ce5430b7e7160f17def85d218ac84eb522f00d0f2ce3ccb] <==
	{"level":"warn","ts":"2025-12-17T00:36:52.506177Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49912","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:36:52.521812Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49916","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:36:52.538384Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49940","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:36:52.553220Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49944","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:36:52.566369Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49964","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:36:52.577451Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49976","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:36:52.590472Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49986","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:36:52.601524Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50014","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:36:52.616015Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50032","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:36:52.646815Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50062","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:36:52.657793Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50096","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:36:52.671875Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50114","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:36:52.684544Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50138","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:36:52.692838Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50160","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:36:52.699787Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50176","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:36:52.708310Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50194","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:36:52.715836Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:36:52.723929Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50234","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:36:52.733370Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50258","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:36:52.741616Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50278","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:36:52.752136Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50294","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:36:52.761743Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50320","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:36:52.780592Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50346","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:36:52.792464Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50382","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:36:52.894318Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50452","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 00:37:27 up  1:19,  0 user,  load average: 3.84, 1.97, 1.36
	Linux pause-004564 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [e75763c0a782db6b82bbcdce6755f2e6f1c075e2577e8608b78caad6bd9e0685] <==
	I1217 00:37:02.043647       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1217 00:37:02.043906       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1217 00:37:02.047096       1 main.go:148] setting mtu 1500 for CNI 
	I1217 00:37:02.047128       1 main.go:178] kindnetd IP family: "ipv4"
	I1217 00:37:02.047150       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-17T00:37:02Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1217 00:37:02.244623       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1217 00:37:02.245299       1 controller.go:381] "Waiting for informer caches to sync"
	I1217 00:37:02.245315       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1217 00:37:02.245604       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1217 00:37:02.641574       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1217 00:37:02.641625       1 metrics.go:72] Registering metrics
	I1217 00:37:02.641739       1 controller.go:711] "Syncing nftables rules"
	I1217 00:37:12.245062       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1217 00:37:12.245141       1 main.go:301] handling current node
	I1217 00:37:22.251242       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1217 00:37:22.251275       1 main.go:301] handling current node
	
	
	==> kube-apiserver [8ad49dd6c20bb9213f368f20baf3c0d05e5d7b019e452f80bf3758ec6690483c] <==
	I1217 00:36:53.651211       1 policy_source.go:240] refreshing policies
	E1217 00:36:53.691350       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1217 00:36:53.693473       1 controller.go:667] quota admission added evaluator for: namespaces
	I1217 00:36:53.696408       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1217 00:36:53.698604       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1217 00:36:53.709139       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1217 00:36:53.710099       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1217 00:36:53.833727       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1217 00:36:54.497137       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1217 00:36:54.501724       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1217 00:36:54.501742       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1217 00:36:54.939953       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1217 00:36:54.975874       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1217 00:36:55.093438       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1217 00:36:55.106232       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1217 00:36:55.107497       1 controller.go:667] quota admission added evaluator for: endpoints
	I1217 00:36:55.119400       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1217 00:36:55.552524       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1217 00:36:55.858741       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1217 00:36:55.867547       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1217 00:36:55.873821       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1217 00:37:01.255538       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1217 00:37:01.258537       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1217 00:37:01.404780       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1217 00:37:01.603308       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [2ec6662e3cdaf85a7ed50022d863b9b780287c6ce573ffe5414b6462ad51e698] <==
	I1217 00:37:00.550267       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1217 00:37:00.551425       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1217 00:37:00.551438       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1217 00:37:00.551575       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1217 00:37:00.551581       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1217 00:37:00.551605       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1217 00:37:00.551772       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1217 00:37:00.552326       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1217 00:37:00.552338       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1217 00:37:00.552591       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1217 00:37:00.553839       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1217 00:37:00.553931       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1217 00:37:00.553973       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1217 00:37:00.554053       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1217 00:37:00.555702       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1217 00:37:00.556477       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1217 00:37:00.556540       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1217 00:37:00.556654       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1217 00:37:00.556676       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1217 00:37:00.556683       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1217 00:37:00.559433       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1217 00:37:00.563209       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-004564" podCIDRs=["10.244.0.0/24"]
	I1217 00:37:00.572958       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1217 00:37:00.576071       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1217 00:37:15.484035       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [312296c55960188c8b0406f7ef3b76b6aa39658e5891d4d5d9cb3e5c5de8a96b] <==
	I1217 00:37:01.830482       1 server_linux.go:53] "Using iptables proxy"
	I1217 00:37:01.907264       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1217 00:37:02.007680       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1217 00:37:02.007735       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1217 00:37:02.007836       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1217 00:37:02.028897       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1217 00:37:02.029067       1 server_linux.go:132] "Using iptables Proxier"
	I1217 00:37:02.034659       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1217 00:37:02.034944       1 server.go:527] "Version info" version="v1.34.2"
	I1217 00:37:02.034965       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 00:37:02.036592       1 config.go:309] "Starting node config controller"
	I1217 00:37:02.036610       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1217 00:37:02.036617       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1217 00:37:02.036724       1 config.go:200] "Starting service config controller"
	I1217 00:37:02.036733       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1217 00:37:02.036755       1 config.go:106] "Starting endpoint slice config controller"
	I1217 00:37:02.036760       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1217 00:37:02.036780       1 config.go:403] "Starting serviceCIDR config controller"
	I1217 00:37:02.036792       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1217 00:37:02.137305       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1217 00:37:02.137331       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1217 00:37:02.137303       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [822c708ab7dfc212f531b65b81db4f5bc4505719e6a921f57d1232f703310c0a] <==
	E1217 00:36:53.592897       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1217 00:36:53.593005       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1217 00:36:53.593413       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1217 00:36:53.593470       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1217 00:36:53.593511       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1217 00:36:53.593554       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1217 00:36:53.594300       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1217 00:36:53.594384       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1217 00:36:53.595513       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1217 00:36:53.594645       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1217 00:36:53.594695       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1217 00:36:53.594733       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1217 00:36:53.594754       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1217 00:36:53.595104       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1217 00:36:53.594586       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1217 00:36:53.601668       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1217 00:36:54.428558       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1217 00:36:54.439982       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1217 00:36:54.474254       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1217 00:36:54.521787       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1217 00:36:54.578947       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1217 00:36:54.606777       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1217 00:36:54.743335       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1217 00:36:54.746434       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	I1217 00:36:56.884929       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 17 00:37:17 pause-004564 kubelet[1323]: E1217 00:37:17.712238    1323 kubelet_pods.go:1266] "Error listing containers" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 17 00:37:17 pause-004564 kubelet[1323]: E1217 00:37:17.712255    1323 kubelet.go:2614] "Failed cleaning pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 17 00:37:17 pause-004564 kubelet[1323]: E1217 00:37:17.770561    1323 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="<nil>"
	Dec 17 00:37:17 pause-004564 kubelet[1323]: E1217 00:37:17.770642    1323 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 17 00:37:17 pause-004564 kubelet[1323]: E1217 00:37:17.770665    1323 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 17 00:37:17 pause-004564 kubelet[1323]: W1217 00:37:17.812859    1323 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Dec 17 00:37:17 pause-004564 kubelet[1323]: W1217 00:37:17.987064    1323 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Dec 17 00:37:18 pause-004564 kubelet[1323]: W1217 00:37:18.256881    1323 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Dec 17 00:37:18 pause-004564 kubelet[1323]: W1217 00:37:18.626148    1323 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Dec 17 00:37:18 pause-004564 kubelet[1323]: E1217 00:37:18.771597    1323 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="<nil>"
	Dec 17 00:37:18 pause-004564 kubelet[1323]: E1217 00:37:18.771645    1323 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 17 00:37:18 pause-004564 kubelet[1323]: E1217 00:37:18.771656    1323 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 17 00:37:19 pause-004564 kubelet[1323]: W1217 00:37:19.411246    1323 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Dec 17 00:37:19 pause-004564 kubelet[1323]: E1217 00:37:19.711988    1323 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="state:{}"
	Dec 17 00:37:19 pause-004564 kubelet[1323]: E1217 00:37:19.712135    1323 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 17 00:37:19 pause-004564 kubelet[1323]: E1217 00:37:19.712161    1323 kubelet_pods.go:1266] "Error listing containers" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 17 00:37:19 pause-004564 kubelet[1323]: E1217 00:37:19.712194    1323 kubelet.go:2614] "Failed cleaning pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 17 00:37:19 pause-004564 kubelet[1323]: E1217 00:37:19.772657    1323 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="<nil>"
	Dec 17 00:37:19 pause-004564 kubelet[1323]: E1217 00:37:19.772722    1323 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 17 00:37:19 pause-004564 kubelet[1323]: E1217 00:37:19.772747    1323 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 17 00:37:24 pause-004564 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 17 00:37:24 pause-004564 kubelet[1323]: I1217 00:37:24.528554    1323 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Dec 17 00:37:24 pause-004564 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 17 00:37:24 pause-004564 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:37:24 pause-004564 systemd[1]: kubelet.service: Consumed 1.138s CPU time.
	

                                                
                                                
-- /stdout --
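Every kubelet error in the block above reduces to the same condition: the CRI-O socket at /var/run/crio/crio.sock could not be reached ("connect: no such file or directory") for the whole window before systemd stopped the kubelet at 00:37:24. A minimal sketch, assuming shell access to the pause-004564 node and using only the socket path quoted in those errors (this helper is illustrative, not part of the test harness), of confirming that condition in Go:

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// Socket path taken verbatim from the kubelet errors above.
	const sock = "/var/run/crio/crio.sock"

	// A missing socket file matches the "no such file or directory" errors in the log.
	if _, err := os.Stat(sock); err != nil {
		fmt.Printf("socket missing: %v\n", err)
		return
	}

	// The file can also linger after crio stops, so try an actual connection.
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Printf("socket present but refusing connections: %v\n", err)
		return
	}
	conn.Close()
	fmt.Println("CRI-O socket is reachable")
}

Run on the node (for example after `minikube ssh -p pause-004564`), this would have reported the socket as missing throughout the interval covered by the kubelet log above.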
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-004564 -n pause-004564
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-004564 -n pause-004564: exit status 2 (313.006292ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context pause-004564 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect pause-004564
helpers_test.go:244: (dbg) docker inspect pause-004564:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c272977a8ce7ffb0a5e2589b8ea71824443603a9dcca351180383dbfc7518ee6",
	        "Created": "2025-12-17T00:36:33.399458448Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 202101,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T00:36:35.039454406Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2e44aac5cae5bb6b68b129ed5c85e80a5c1aac07706537d46ba12326f0e5c3cf",
	        "ResolvConfPath": "/var/lib/docker/containers/c272977a8ce7ffb0a5e2589b8ea71824443603a9dcca351180383dbfc7518ee6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c272977a8ce7ffb0a5e2589b8ea71824443603a9dcca351180383dbfc7518ee6/hostname",
	        "HostsPath": "/var/lib/docker/containers/c272977a8ce7ffb0a5e2589b8ea71824443603a9dcca351180383dbfc7518ee6/hosts",
	        "LogPath": "/var/lib/docker/containers/c272977a8ce7ffb0a5e2589b8ea71824443603a9dcca351180383dbfc7518ee6/c272977a8ce7ffb0a5e2589b8ea71824443603a9dcca351180383dbfc7518ee6-json.log",
	        "Name": "/pause-004564",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-004564:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-004564",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c272977a8ce7ffb0a5e2589b8ea71824443603a9dcca351180383dbfc7518ee6",
	                "LowerDir": "/var/lib/docker/overlay2/4440d8fb5abcb6d1e92daf5f4e790f0143ce0c81d3a8f9ed7efb5badb13dda35-init/diff:/var/lib/docker/overlay2/594b812fd6d8db89dab322ea9e00d43dd555e9709fb5e6953e3873cce717392c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4440d8fb5abcb6d1e92daf5f4e790f0143ce0c81d3a8f9ed7efb5badb13dda35/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4440d8fb5abcb6d1e92daf5f4e790f0143ce0c81d3a8f9ed7efb5badb13dda35/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4440d8fb5abcb6d1e92daf5f4e790f0143ce0c81d3a8f9ed7efb5badb13dda35/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-004564",
	                "Source": "/var/lib/docker/volumes/pause-004564/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-004564",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-004564",
	                "name.minikube.sigs.k8s.io": "pause-004564",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "d204db33066d3718df4faf649607d290cbecd512b84dfe338ed045623d50a34b",
	            "SandboxKey": "/var/run/docker/netns/d204db33066d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32978"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32979"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32982"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32980"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32981"
	                    }
	                ]
	            },
	            "Networks": {
	                "pause-004564": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f54752e69b6f63b62ff8d5c8db4b91cbc4f8f60304c6d17c87da8cafe2ccc229",
	                    "EndpointID": "a85dd5b5ffc6d608e752b68e3344e98a5baefe84bfa12dfb8530904332eacc0a",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "f2:4b:10:9b:7b:3e",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-004564",
	                        "c272977a8ce7"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
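The inspect output confirms the container itself is still Running with its published ports intact (22/tcp on 127.0.0.1:32978, 8443/tcp on 127.0.0.1:32981), which is why the status and logs commands below can still reach the node even though pausing failed. A minimal sketch, assuming the JSON above is piped on stdin (the struct mirrors only the fields shown and is illustrative, not minikube code), of extracting those host-port mappings in Go:

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// Only the fields needed for the port lookup; `docker inspect` emits a JSON array.
type inspectEntry struct {
	Name            string
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string
			HostPort string
		}
	}
}

func main() {
	var entries []inspectEntry
	if err := json.NewDecoder(os.Stdin).Decode(&entries); err != nil {
		fmt.Fprintln(os.Stderr, "decode:", err)
		os.Exit(1)
	}
	for _, e := range entries {
		// 22/tcp is the SSH port the harness dials (32978 in the output above).
		for _, b := range e.NetworkSettings.Ports["22/tcp"] {
			fmt.Printf("%s 22/tcp -> %s:%s\n", e.Name, b.HostIp, b.HostPort)
		}
	}
}

For the state captured above, `docker inspect pause-004564 | go run main.go` would print `/pause-004564 22/tcp -> 127.0.0.1:32978`.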
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-004564 -n pause-004564
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-004564 -n pause-004564: exit status 2 (309.677475ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p pause-004564 logs -n 25
helpers_test.go:261: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │           PROFILE           │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼─────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p scheduled-stop-123503 --memory=3072 --driver=docker  --container-runtime=crio                                                         │ scheduled-stop-123503       │ jenkins │ v1.37.0 │ 17 Dec 25 00:34 UTC │ 17 Dec 25 00:34 UTC │
	│ stop    │ -p scheduled-stop-123503 --schedule 5m -v=5 --alsologtostderr                                                                            │ scheduled-stop-123503       │ jenkins │ v1.37.0 │ 17 Dec 25 00:34 UTC │                     │
	│ stop    │ -p scheduled-stop-123503 --schedule 5m -v=5 --alsologtostderr                                                                            │ scheduled-stop-123503       │ jenkins │ v1.37.0 │ 17 Dec 25 00:34 UTC │                     │
	│ stop    │ -p scheduled-stop-123503 --schedule 5m -v=5 --alsologtostderr                                                                            │ scheduled-stop-123503       │ jenkins │ v1.37.0 │ 17 Dec 25 00:34 UTC │                     │
	│ stop    │ -p scheduled-stop-123503 --schedule 15s -v=5 --alsologtostderr                                                                           │ scheduled-stop-123503       │ jenkins │ v1.37.0 │ 17 Dec 25 00:34 UTC │                     │
	│ stop    │ -p scheduled-stop-123503 --schedule 15s -v=5 --alsologtostderr                                                                           │ scheduled-stop-123503       │ jenkins │ v1.37.0 │ 17 Dec 25 00:34 UTC │                     │
	│ stop    │ -p scheduled-stop-123503 --schedule 15s -v=5 --alsologtostderr                                                                           │ scheduled-stop-123503       │ jenkins │ v1.37.0 │ 17 Dec 25 00:34 UTC │                     │
	│ stop    │ -p scheduled-stop-123503 --cancel-scheduled                                                                                              │ scheduled-stop-123503       │ jenkins │ v1.37.0 │ 17 Dec 25 00:34 UTC │ 17 Dec 25 00:34 UTC │
	│ stop    │ -p scheduled-stop-123503 --schedule 15s -v=5 --alsologtostderr                                                                           │ scheduled-stop-123503       │ jenkins │ v1.37.0 │ 17 Dec 25 00:35 UTC │                     │
	│ stop    │ -p scheduled-stop-123503 --schedule 15s -v=5 --alsologtostderr                                                                           │ scheduled-stop-123503       │ jenkins │ v1.37.0 │ 17 Dec 25 00:35 UTC │                     │
	│ stop    │ -p scheduled-stop-123503 --schedule 15s -v=5 --alsologtostderr                                                                           │ scheduled-stop-123503       │ jenkins │ v1.37.0 │ 17 Dec 25 00:35 UTC │ 17 Dec 25 00:35 UTC │
	│ delete  │ -p scheduled-stop-123503                                                                                                                 │ scheduled-stop-123503       │ jenkins │ v1.37.0 │ 17 Dec 25 00:36 UTC │ 17 Dec 25 00:36 UTC │
	│ start   │ -p insufficient-storage-503106 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio                         │ insufficient-storage-503106 │ jenkins │ v1.37.0 │ 17 Dec 25 00:36 UTC │                     │
	│ delete  │ -p insufficient-storage-503106                                                                                                           │ insufficient-storage-503106 │ jenkins │ v1.37.0 │ 17 Dec 25 00:36 UTC │ 17 Dec 25 00:36 UTC │
	│ start   │ -p offline-crio-981697 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio                        │ offline-crio-981697         │ jenkins │ v1.37.0 │ 17 Dec 25 00:36 UTC │ 17 Dec 25 00:37 UTC │
	│ start   │ -p pause-004564 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                │ pause-004564                │ jenkins │ v1.37.0 │ 17 Dec 25 00:36 UTC │ 17 Dec 25 00:37 UTC │
	│ start   │ -p stopped-upgrade-028618 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ stopped-upgrade-028618      │ jenkins │ v1.35.0 │ 17 Dec 25 00:36 UTC │ 17 Dec 25 00:36 UTC │
	│ start   │ -p missing-upgrade-043393 --memory=3072 --driver=docker  --container-runtime=crio                                                        │ missing-upgrade-043393      │ jenkins │ v1.35.0 │ 17 Dec 25 00:36 UTC │ 17 Dec 25 00:37 UTC │
	│ stop    │ stopped-upgrade-028618 stop                                                                                                              │ stopped-upgrade-028618      │ jenkins │ v1.35.0 │ 17 Dec 25 00:36 UTC │ 17 Dec 25 00:37 UTC │
	│ start   │ -p stopped-upgrade-028618 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ stopped-upgrade-028618      │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │                     │
	│ start   │ -p missing-upgrade-043393 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ missing-upgrade-043393      │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │                     │
	│ delete  │ -p offline-crio-981697                                                                                                                   │ offline-crio-981697         │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │ 17 Dec 25 00:37 UTC │
	│ start   │ -p kubernetes-upgrade-803959 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-803959   │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │                     │
	│ start   │ -p pause-004564 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                         │ pause-004564                │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │ 17 Dec 25 00:37 UTC │
	│ pause   │ -p pause-004564 --alsologtostderr -v=5                                                                                                   │ pause-004564                │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴─────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 00:37:15
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 00:37:15.942494  214740 out.go:360] Setting OutFile to fd 1 ...
	I1217 00:37:15.942590  214740 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:37:15.942596  214740 out.go:374] Setting ErrFile to fd 2...
	I1217 00:37:15.942602  214740 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:37:15.942848  214740 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-12816/.minikube/bin
	I1217 00:37:15.943276  214740 out.go:368] Setting JSON to false
	I1217 00:37:15.944399  214740 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4786,"bootTime":1765927050,"procs":287,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 00:37:15.944454  214740 start.go:143] virtualization: kvm guest
	I1217 00:37:15.946542  214740 out.go:179] * [pause-004564] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 00:37:15.947888  214740 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 00:37:15.947903  214740 notify.go:221] Checking for updates...
	I1217 00:37:15.950580  214740 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 00:37:15.952122  214740 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22168-12816/kubeconfig
	I1217 00:37:15.953569  214740 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-12816/.minikube
	I1217 00:37:15.954844  214740 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 00:37:15.956199  214740 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 00:37:15.957959  214740 config.go:182] Loaded profile config "pause-004564": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:37:15.958500  214740 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 00:37:15.983815  214740 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1217 00:37:15.983903  214740 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:37:16.041574  214740 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:83 SystemTime:2025-12-17 00:37:16.030862085 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 00:37:16.041679  214740 docker.go:319] overlay module found
	I1217 00:37:16.044433  214740 out.go:179] * Using the docker driver based on existing profile
	I1217 00:37:16.045731  214740 start.go:309] selected driver: docker
	I1217 00:37:16.045745  214740 start.go:927] validating driver "docker" against &{Name:pause-004564 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-004564 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false regi
stry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:37:16.045874  214740 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 00:37:16.045962  214740 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:37:16.104337  214740 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:83 SystemTime:2025-12-17 00:37:16.095373527 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 00:37:16.105251  214740 cni.go:84] Creating CNI manager for ""
	I1217 00:37:16.105326  214740 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 00:37:16.105387  214740 start.go:353] cluster config:
	{Name:pause-004564 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-004564 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false
storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:37:16.108114  214740 out.go:179] * Starting "pause-004564" primary control-plane node in "pause-004564" cluster
	I1217 00:37:16.109366  214740 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 00:37:16.110566  214740 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 00:37:16.111911  214740 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1217 00:37:16.111940  214740 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22168-12816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1217 00:37:16.111948  214740 cache.go:65] Caching tarball of preloaded images
	I1217 00:37:16.112023  214740 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 00:37:16.112147  214740 preload.go:238] Found /home/jenkins/minikube-integration/22168-12816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1217 00:37:16.112162  214740 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1217 00:37:16.112284  214740 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/pause-004564/config.json ...
	I1217 00:37:16.132060  214740 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 00:37:16.132085  214740 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 00:37:16.132123  214740 cache.go:243] Successfully downloaded all kic artifacts
	I1217 00:37:16.132161  214740 start.go:360] acquireMachinesLock for pause-004564: {Name:mka8c0316ef00c32675091c9dd37d74ceb3222c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 00:37:16.132230  214740 start.go:364] duration metric: took 45.555µs to acquireMachinesLock for "pause-004564"
	I1217 00:37:16.132256  214740 start.go:96] Skipping create...Using existing machine configuration
	I1217 00:37:16.132265  214740 fix.go:54] fixHost starting: 
	I1217 00:37:16.132551  214740 cli_runner.go:164] Run: docker container inspect pause-004564 --format={{.State.Status}}
	I1217 00:37:16.151118  214740 fix.go:112] recreateIfNeeded on pause-004564: state=Running err=<nil>
	W1217 00:37:16.151143  214740 fix.go:138] unexpected machine state, will restart: <nil>
	I1217 00:37:14.873642  214288 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1217 00:37:14.873889  214288 start.go:159] libmachine.API.Create for "kubernetes-upgrade-803959" (driver="docker")
	I1217 00:37:14.873922  214288 client.go:173] LocalClient.Create starting
	I1217 00:37:14.874012  214288 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem
	I1217 00:37:14.874055  214288 main.go:143] libmachine: Decoding PEM data...
	I1217 00:37:14.874079  214288 main.go:143] libmachine: Parsing certificate...
	I1217 00:37:14.874163  214288 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22168-12816/.minikube/certs/cert.pem
	I1217 00:37:14.874194  214288 main.go:143] libmachine: Decoding PEM data...
	I1217 00:37:14.874213  214288 main.go:143] libmachine: Parsing certificate...
	I1217 00:37:14.874589  214288 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-803959 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1217 00:37:14.890633  214288 cli_runner.go:211] docker network inspect kubernetes-upgrade-803959 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1217 00:37:14.890687  214288 network_create.go:284] running [docker network inspect kubernetes-upgrade-803959] to gather additional debugging logs...
	I1217 00:37:14.890705  214288 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-803959
	W1217 00:37:14.908220  214288 cli_runner.go:211] docker network inspect kubernetes-upgrade-803959 returned with exit code 1
	I1217 00:37:14.908249  214288 network_create.go:287] error running [docker network inspect kubernetes-upgrade-803959]: docker network inspect kubernetes-upgrade-803959: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kubernetes-upgrade-803959 not found
	I1217 00:37:14.908263  214288 network_create.go:289] output of [docker network inspect kubernetes-upgrade-803959]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kubernetes-upgrade-803959 not found
	
	** /stderr **
	I1217 00:37:14.908360  214288 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 00:37:14.925178  214288 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-6ffd1d738f01 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:6e:3d:52:75:47:82} reservation:<nil>}
	I1217 00:37:14.925659  214288 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-280edd437675 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:5e:ae:02:b5:f9:a6} reservation:<nil>}
	I1217 00:37:14.926286  214288 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-9f28d049043c IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:fa:3f:8e:e9:44:56} reservation:<nil>}
	I1217 00:37:14.927337  214288 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e9a940}
	I1217 00:37:14.927364  214288 network_create.go:124] attempt to create docker network kubernetes-upgrade-803959 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1217 00:37:14.927402  214288 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-803959 kubernetes-upgrade-803959
	I1217 00:37:14.973750  214288 network_create.go:108] docker network kubernetes-upgrade-803959 192.168.76.0/24 created
	I1217 00:37:14.973783  214288 kic.go:121] calculated static IP "192.168.76.2" for the "kubernetes-upgrade-803959" container
	I1217 00:37:14.973833  214288 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1217 00:37:14.990875  214288 cli_runner.go:164] Run: docker volume create kubernetes-upgrade-803959 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-803959 --label created_by.minikube.sigs.k8s.io=true
	I1217 00:37:15.008010  214288 oci.go:103] Successfully created a docker volume kubernetes-upgrade-803959
	I1217 00:37:15.008076  214288 cli_runner.go:164] Run: docker run --rm --name kubernetes-upgrade-803959-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-803959 --entrypoint /usr/bin/test -v kubernetes-upgrade-803959:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib
	I1217 00:37:15.373085  214288 oci.go:107] Successfully prepared a docker volume kubernetes-upgrade-803959
	I1217 00:37:15.373143  214288 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1217 00:37:15.373154  214288 kic.go:194] Starting extracting preloaded images to volume ...
	I1217 00:37:15.373210  214288 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22168-12816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-803959:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir
	I1217 00:37:19.677309  214288 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22168-12816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-803959:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir: (4.304047005s)
	I1217 00:37:19.677344  214288 kic.go:203] duration metric: took 4.30418664s to extract preloaded images to volume ...
	W1217 00:37:19.677421  214288 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1217 00:37:19.677446  214288 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1217 00:37:19.677491  214288 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1217 00:37:19.503285  211439 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1217 00:37:19.503324  211439 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	W1217 00:37:15.531496  213566 cli_runner.go:211] docker container inspect missing-upgrade-043393 --format={{.State.Status}} returned with exit code 1
	I1217 00:37:15.531550  213566 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-043393": docker container inspect missing-upgrade-043393 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-043393
	I1217 00:37:15.531564  213566 oci.go:673] temporary error: container missing-upgrade-043393 status is  but expect it to be exited
	I1217 00:37:15.531601  213566 retry.go:31] will retry after 1.394778783s: couldn't verify container is exited. %v: unknown state "missing-upgrade-043393": docker container inspect missing-upgrade-043393 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-043393
	I1217 00:37:16.927105  213566 cli_runner.go:164] Run: docker container inspect missing-upgrade-043393 --format={{.State.Status}}
	W1217 00:37:16.945629  213566 cli_runner.go:211] docker container inspect missing-upgrade-043393 --format={{.State.Status}} returned with exit code 1
	I1217 00:37:16.945706  213566 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-043393": docker container inspect missing-upgrade-043393 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-043393
	I1217 00:37:16.945721  213566 oci.go:673] temporary error: container missing-upgrade-043393 status is  but expect it to be exited
	I1217 00:37:16.945755  213566 retry.go:31] will retry after 3.402441004s: couldn't verify container is exited. %v: unknown state "missing-upgrade-043393": docker container inspect missing-upgrade-043393 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-043393
	I1217 00:37:20.351124  213566 cli_runner.go:164] Run: docker container inspect missing-upgrade-043393 --format={{.State.Status}}
	W1217 00:37:20.372430  213566 cli_runner.go:211] docker container inspect missing-upgrade-043393 --format={{.State.Status}} returned with exit code 1
	I1217 00:37:20.372516  213566 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-043393": docker container inspect missing-upgrade-043393 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-043393
	I1217 00:37:20.372532  213566 oci.go:673] temporary error: container missing-upgrade-043393 status is  but expect it to be exited
	I1217 00:37:20.372565  213566 retry.go:31] will retry after 3.019199494s: couldn't verify container is exited. %v: unknown state "missing-upgrade-043393": docker container inspect missing-upgrade-043393 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-043393
	I1217 00:37:16.153598  214740 out.go:252] * Updating the running docker "pause-004564" container ...
	I1217 00:37:16.153635  214740 machine.go:94] provisionDockerMachine start ...
	I1217 00:37:16.153701  214740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-004564
	I1217 00:37:16.172258  214740 main.go:143] libmachine: Using SSH client type: native
	I1217 00:37:16.172606  214740 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 32978 <nil> <nil>}
	I1217 00:37:16.172628  214740 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 00:37:16.300764  214740 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-004564
	
	I1217 00:37:16.300793  214740 ubuntu.go:182] provisioning hostname "pause-004564"
	I1217 00:37:16.300867  214740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-004564
	I1217 00:37:16.321794  214740 main.go:143] libmachine: Using SSH client type: native
	I1217 00:37:16.322147  214740 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 32978 <nil> <nil>}
	I1217 00:37:16.322170  214740 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-004564 && echo "pause-004564" | sudo tee /etc/hostname
	I1217 00:37:16.455301  214740 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-004564
	
	I1217 00:37:16.455395  214740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-004564
	I1217 00:37:16.473827  214740 main.go:143] libmachine: Using SSH client type: native
	I1217 00:37:16.474091  214740 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 32978 <nil> <nil>}
	I1217 00:37:16.474123  214740 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-004564' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-004564/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-004564' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 00:37:16.600659  214740 main.go:143] libmachine: SSH cmd err, output: <nil>: 
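A quick way to confirm what the script above left on the node (run inside an SSH session to the container; the script only rewrites or appends the 127.0.1.1 entry when no line for the hostname exists yet):

	grep -n 'pause-004564' /etc/hosts
	# expected: a single "127.0.1.1 pause-004564" entry
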
	I1217 00:37:16.600685  214740 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22168-12816/.minikube CaCertPath:/home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22168-12816/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22168-12816/.minikube}
	I1217 00:37:16.600737  214740 ubuntu.go:190] setting up certificates
	I1217 00:37:16.600748  214740 provision.go:84] configureAuth start
	I1217 00:37:16.600801  214740 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-004564
	I1217 00:37:16.621349  214740 provision.go:143] copyHostCerts
	I1217 00:37:16.621415  214740 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-12816/.minikube/cert.pem, removing ...
	I1217 00:37:16.621429  214740 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-12816/.minikube/cert.pem
	I1217 00:37:16.621493  214740 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22168-12816/.minikube/cert.pem (1123 bytes)
	I1217 00:37:16.621635  214740 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-12816/.minikube/key.pem, removing ...
	I1217 00:37:16.621647  214740 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-12816/.minikube/key.pem
	I1217 00:37:16.621673  214740 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22168-12816/.minikube/key.pem (1679 bytes)
	I1217 00:37:16.621744  214740 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-12816/.minikube/ca.pem, removing ...
	I1217 00:37:16.621752  214740 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-12816/.minikube/ca.pem
	I1217 00:37:16.621775  214740 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22168-12816/.minikube/ca.pem (1078 bytes)
	I1217 00:37:16.621845  214740 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22168-12816/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca-key.pem org=jenkins.pause-004564 san=[127.0.0.1 192.168.85.2 localhost minikube pause-004564]
	I1217 00:37:16.769583  214740 provision.go:177] copyRemoteCerts
	I1217 00:37:16.769652  214740 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 00:37:16.769706  214740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-004564
	I1217 00:37:16.788136  214740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32978 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/pause-004564/id_rsa Username:docker}
	I1217 00:37:16.882035  214740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1217 00:37:16.900209  214740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1217 00:37:16.916656  214740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 00:37:16.934684  214740 provision.go:87] duration metric: took 333.914753ms to configureAuth
	I1217 00:37:16.934724  214740 ubuntu.go:206] setting minikube options for container-runtime
	I1217 00:37:16.935020  214740 config.go:182] Loaded profile config "pause-004564": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:37:16.935124  214740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-004564
	I1217 00:37:16.952944  214740 main.go:143] libmachine: Using SSH client type: native
	I1217 00:37:16.953256  214740 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 32978 <nil> <nil>}
	I1217 00:37:16.953292  214740 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 00:37:19.847288  214740 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 00:37:19.847313  214740 machine.go:97] duration metric: took 3.693669477s to provisionDockerMachine
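The sysconfig drop-in written by the tee command above can be checked directly on the node; a hedged sketch (the path is the one from the command itself, and the systemctl call just confirms the restart left cri-o running):

	cat /etc/sysconfig/crio.minikube
	# CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	systemctl is-active crio   # expected: active
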
	I1217 00:37:19.847332  214740 start.go:293] postStartSetup for "pause-004564" (driver="docker")
	I1217 00:37:19.847345  214740 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 00:37:19.847416  214740 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 00:37:19.847470  214740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-004564
	I1217 00:37:19.869519  214740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32978 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/pause-004564/id_rsa Username:docker}
	I1217 00:37:19.965021  214740 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 00:37:19.969046  214740 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 00:37:19.969072  214740 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 00:37:19.969082  214740 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-12816/.minikube/addons for local assets ...
	I1217 00:37:19.969122  214740 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-12816/.minikube/files for local assets ...
	I1217 00:37:19.969190  214740 filesync.go:149] local asset: /home/jenkins/minikube-integration/22168-12816/.minikube/files/etc/ssl/certs/163542.pem -> 163542.pem in /etc/ssl/certs
	I1217 00:37:19.969276  214740 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 00:37:19.977369  214740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/files/etc/ssl/certs/163542.pem --> /etc/ssl/certs/163542.pem (1708 bytes)
	I1217 00:37:19.994941  214740 start.go:296] duration metric: took 147.593732ms for postStartSetup
	I1217 00:37:19.995033  214740 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 00:37:19.995082  214740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-004564
	I1217 00:37:20.014156  214740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32978 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/pause-004564/id_rsa Username:docker}
	I1217 00:37:20.106785  214740 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 00:37:20.111391  214740 fix.go:56] duration metric: took 3.979116257s for fixHost
	I1217 00:37:20.111415  214740 start.go:83] releasing machines lock for "pause-004564", held for 3.979173755s
	I1217 00:37:20.111471  214740 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-004564
	I1217 00:37:20.130280  214740 ssh_runner.go:195] Run: cat /version.json
	I1217 00:37:20.130336  214740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-004564
	I1217 00:37:20.130359  214740 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 00:37:20.130429  214740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-004564
	I1217 00:37:20.149651  214740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32978 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/pause-004564/id_rsa Username:docker}
	I1217 00:37:20.150278  214740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32978 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/pause-004564/id_rsa Username:docker}
	I1217 00:37:20.322588  214740 ssh_runner.go:195] Run: systemctl --version
	I1217 00:37:20.329674  214740 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 00:37:20.370168  214740 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 00:37:20.375433  214740 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 00:37:20.375494  214740 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 00:37:20.384359  214740 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1217 00:37:20.384389  214740 start.go:496] detecting cgroup driver to use...
	I1217 00:37:20.384422  214740 detect.go:190] detected "systemd" cgroup driver on host os
	I1217 00:37:20.384475  214740 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 00:37:20.400326  214740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 00:37:20.413876  214740 docker.go:218] disabling cri-docker service (if available) ...
	I1217 00:37:20.413943  214740 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 00:37:20.428818  214740 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 00:37:20.441055  214740 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 00:37:20.563323  214740 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 00:37:20.673543  214740 docker.go:234] disabling docker service ...
	I1217 00:37:20.673626  214740 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 00:37:20.687952  214740 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 00:37:20.699555  214740 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 00:37:20.806184  214740 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 00:37:20.911088  214740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 00:37:20.922920  214740 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 00:37:20.936273  214740 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 00:37:20.936321  214740 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:37:20.944471  214740 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1217 00:37:20.944511  214740 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:37:20.952639  214740 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:37:20.960687  214740 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:37:20.968665  214740 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 00:37:20.976017  214740 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:37:20.983949  214740 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:37:20.991557  214740 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:37:20.999490  214740 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 00:37:21.006585  214740 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 00:37:21.013358  214740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:37:21.113894  214740 ssh_runner.go:195] Run: sudo systemctl restart crio
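The sed edits above all target the same drop-in; a hedged way to confirm they took effect after the restart (file path taken from the commands, key names from the edits themselves):

	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf
	# expected: pause_image = "registry.k8s.io/pause:3.10.1", cgroup_manager = "systemd",
	#           conmon_cgroup = "pod", and "net.ipv4.ip_unprivileged_port_start=0" under default_sysctls
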
	I1217 00:37:21.280679  214740 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 00:37:21.280748  214740 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 00:37:21.284596  214740 start.go:564] Will wait 60s for crictl version
	I1217 00:37:21.284662  214740 ssh_runner.go:195] Run: which crictl
	I1217 00:37:21.287987  214740 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 00:37:21.311961  214740 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1217 00:37:21.312051  214740 ssh_runner.go:195] Run: crio --version
	I1217 00:37:21.338341  214740 ssh_runner.go:195] Run: crio --version
	I1217 00:37:21.365515  214740 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1217 00:37:21.366761  214740 cli_runner.go:164] Run: docker network inspect pause-004564 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 00:37:21.383632  214740 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1217 00:37:21.387730  214740 kubeadm.go:884] updating cluster {Name:pause-004564 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-004564 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false regist
ry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 00:37:21.387887  214740 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1217 00:37:21.387941  214740 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 00:37:21.419683  214740 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 00:37:21.419713  214740 crio.go:433] Images already preloaded, skipping extraction
	I1217 00:37:21.419765  214740 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 00:37:21.442611  214740 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 00:37:21.442631  214740 cache_images.go:86] Images are preloaded, skipping loading
	I1217 00:37:21.442638  214740 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.2 crio true true} ...
	I1217 00:37:21.442747  214740 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-004564 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:pause-004564 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
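The kubelet flags above end up in the systemd drop-in that is scp'd a few lines below; they can be read back with either of these (paths from the scp targets):

	cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	systemctl cat kubelet   # shows /lib/systemd/system/kubelet.service plus the drop-in
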
	I1217 00:37:21.442823  214740 ssh_runner.go:195] Run: crio config
	I1217 00:37:21.485181  214740 cni.go:84] Creating CNI manager for ""
	I1217 00:37:21.485204  214740 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 00:37:21.485219  214740 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 00:37:21.485246  214740 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-004564 NodeName:pause-004564 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes
/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 00:37:21.485388  214740 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-004564"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
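This generated manifest is written to /var/tmp/minikube/kubeadm.yaml.new a few lines below. Because this run restarts an existing control plane, kubeadm is never re-invoked with it here; on a fresh node the same file would typically be validated or consumed along these lines (the kubeadm binary path mirrors the kubelet path in the unit above and is an assumption):

	sudo /var/lib/minikube/binaries/v1.34.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
	# and, on first boot only:
	# sudo /var/lib/minikube/binaries/v1.34.2/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new
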
	
	I1217 00:37:21.485453  214740 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1217 00:37:21.493520  214740 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 00:37:21.493582  214740 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 00:37:21.501042  214740 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1217 00:37:21.513195  214740 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1217 00:37:21.525853  214740 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2208 bytes)
	I1217 00:37:21.537812  214740 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1217 00:37:21.541199  214740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:37:21.648322  214740 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 00:37:21.661734  214740 certs.go:69] Setting up /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/pause-004564 for IP: 192.168.85.2
	I1217 00:37:21.661756  214740 certs.go:195] generating shared ca certs ...
	I1217 00:37:21.661770  214740 certs.go:227] acquiring lock for ca certs: {Name:mk3fafd0dd66863a6056cb02497503a5e6afecf4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:37:21.661924  214740 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22168-12816/.minikube/ca.key
	I1217 00:37:21.661973  214740 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22168-12816/.minikube/proxy-client-ca.key
	I1217 00:37:21.661984  214740 certs.go:257] generating profile certs ...
	I1217 00:37:21.662130  214740 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/pause-004564/client.key
	I1217 00:37:21.662187  214740 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/pause-004564/apiserver.key.40c312f8
	I1217 00:37:21.662240  214740 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/pause-004564/proxy-client.key
	I1217 00:37:21.662343  214740 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/16354.pem (1338 bytes)
	W1217 00:37:21.662382  214740 certs.go:480] ignoring /home/jenkins/minikube-integration/22168-12816/.minikube/certs/16354_empty.pem, impossibly tiny 0 bytes
	I1217 00:37:21.662398  214740 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca-key.pem (1679 bytes)
	I1217 00:37:21.662426  214740 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem (1078 bytes)
	I1217 00:37:21.662459  214740 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/cert.pem (1123 bytes)
	I1217 00:37:21.662490  214740 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/key.pem (1679 bytes)
	I1217 00:37:21.662548  214740 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/files/etc/ssl/certs/163542.pem (1708 bytes)
	I1217 00:37:21.663177  214740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 00:37:21.680387  214740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1217 00:37:21.697202  214740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 00:37:21.714133  214740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1671 bytes)
	I1217 00:37:21.730900  214740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/pause-004564/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1217 00:37:21.747761  214740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/pause-004564/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1217 00:37:21.765226  214740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/pause-004564/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 00:37:21.782757  214740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/pause-004564/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 00:37:21.800624  214740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 00:37:21.818143  214740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/certs/16354.pem --> /usr/share/ca-certificates/16354.pem (1338 bytes)
	I1217 00:37:21.834697  214740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/files/etc/ssl/certs/163542.pem --> /usr/share/ca-certificates/163542.pem (1708 bytes)
	I1217 00:37:21.851180  214740 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 00:37:21.862738  214740 ssh_runner.go:195] Run: openssl version
	I1217 00:37:21.868591  214740 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:37:21.875375  214740 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 00:37:21.882881  214740 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:37:21.886393  214740 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 00:05 /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:37:21.886430  214740 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:37:21.921718  214740 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 00:37:21.929529  214740 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/16354.pem
	I1217 00:37:21.937114  214740 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/16354.pem /etc/ssl/certs/16354.pem
	I1217 00:37:21.944010  214740 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16354.pem
	I1217 00:37:21.947579  214740 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 00:13 /usr/share/ca-certificates/16354.pem
	I1217 00:37:21.947618  214740 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16354.pem
	I1217 00:37:21.982072  214740 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 00:37:21.989116  214740 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/163542.pem
	I1217 00:37:21.996286  214740 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/163542.pem /etc/ssl/certs/163542.pem
	I1217 00:37:22.003347  214740 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/163542.pem
	I1217 00:37:22.006887  214740 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 00:13 /usr/share/ca-certificates/163542.pem
	I1217 00:37:22.006950  214740 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/163542.pem
	I1217 00:37:22.040695  214740 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 00:37:22.047892  214740 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 00:37:22.051504  214740 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 00:37:22.085055  214740 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 00:37:22.118289  214740 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 00:37:22.151209  214740 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 00:37:22.184271  214740 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 00:37:22.217101  214740 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
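All six openssl calls above use -checkend 86400, which exits 0 only while the certificate remains valid for at least another 86400 seconds (24 hours); an equivalent manual check against one of the same files:

	openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver-kubelet-client.crt
	openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	  && echo "valid for at least 24h" || echo "expires within 24h"
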
	I1217 00:37:22.250187  214740 kubeadm.go:401] StartCluster: {Name:pause-004564 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-004564 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[
] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-
aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:37:22.250327  214740 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 00:37:22.250371  214740 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 00:37:22.276640  214740 cri.go:89] found id: "261675999c5abb3189891f43e3b1b48ba4936a0794d59ca8d49a88c9a6851f5b"
	I1217 00:37:22.276658  214740 cri.go:89] found id: "e75763c0a782db6b82bbcdce6755f2e6f1c075e2577e8608b78caad6bd9e0685"
	I1217 00:37:22.276671  214740 cri.go:89] found id: "312296c55960188c8b0406f7ef3b76b6aa39658e5891d4d5d9cb3e5c5de8a96b"
	I1217 00:37:22.276678  214740 cri.go:89] found id: "2ec6662e3cdaf85a7ed50022d863b9b780287c6ce573ffe5414b6462ad51e698"
	I1217 00:37:22.276683  214740 cri.go:89] found id: "8ad49dd6c20bb9213f368f20baf3c0d05e5d7b019e452f80bf3758ec6690483c"
	I1217 00:37:22.276688  214740 cri.go:89] found id: "822c708ab7dfc212f531b65b81db4f5bc4505719e6a921f57d1232f703310c0a"
	I1217 00:37:22.276693  214740 cri.go:89] found id: "75a2fcfe1b2251781ce5430b7e7160f17def85d218ac84eb522f00d0f2ce3ccb"
	I1217 00:37:22.276698  214740 cri.go:89] found id: ""
	I1217 00:37:22.276743  214740 ssh_runner.go:195] Run: sudo runc list -f json
	W1217 00:37:22.287481  214740 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T00:37:22Z" level=error msg="open /run/runc: no such file or directory"
	I1217 00:37:22.287551  214740 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 00:37:22.294970  214740 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1217 00:37:22.295007  214740 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1217 00:37:22.295044  214740 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1217 00:37:22.301902  214740 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1217 00:37:22.302541  214740 kubeconfig.go:125] found "pause-004564" server: "https://192.168.85.2:8443"
	I1217 00:37:22.303468  214740 kapi.go:59] client config for pause-004564: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22168-12816/.minikube/profiles/pause-004564/client.crt", KeyFile:"/home/jenkins/minikube-integration/22168-12816/.minikube/profiles/pause-004564/client.key", CAFile:"/home/jenkins/minikube-integration/22168-12816/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]stri
ng(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28171a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1217 00:37:22.303851  214740 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1217 00:37:22.303871  214740 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1217 00:37:22.303878  214740 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1217 00:37:22.303883  214740 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1217 00:37:22.303889  214740 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1217 00:37:22.304221  214740 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1217 00:37:22.311322  214740 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1217 00:37:22.311348  214740 kubeadm.go:602] duration metric: took 16.335242ms to restartPrimaryControlPlane
	I1217 00:37:22.311358  214740 kubeadm.go:403] duration metric: took 61.180088ms to StartCluster
	I1217 00:37:22.311373  214740 settings.go:142] acquiring lock: {Name:mk7d7632cd00ceda791845d793d841181ea8188a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:37:22.311431  214740 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22168-12816/kubeconfig
	I1217 00:37:22.312307  214740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/kubeconfig: {Name:mkd977475b39383babd1c1bcd25b2b3c1ea2d727 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:37:22.312533  214740 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 00:37:22.312587  214740 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 00:37:22.312760  214740 config.go:182] Loaded profile config "pause-004564": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:37:22.315075  214740 out.go:179] * Enabled addons: 
	I1217 00:37:22.315084  214740 out.go:179] * Verifying Kubernetes components...
	I1217 00:37:22.316126  214740 addons.go:530] duration metric: took 3.54579ms for enable addons: enabled=[]
	I1217 00:37:22.316167  214740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:37:22.420520  214740 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 00:37:22.432775  214740 node_ready.go:35] waiting up to 6m0s for node "pause-004564" to be "Ready" ...
	I1217 00:37:22.440178  214740 node_ready.go:49] node "pause-004564" is "Ready"
	I1217 00:37:22.440203  214740 node_ready.go:38] duration metric: took 7.392338ms for node "pause-004564" to be "Ready" ...
	I1217 00:37:22.440216  214740 api_server.go:52] waiting for apiserver process to appear ...
	I1217 00:37:22.440262  214740 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:37:22.451680  214740 api_server.go:72] duration metric: took 139.121359ms to wait for apiserver process to appear ...
	I1217 00:37:22.451701  214740 api_server.go:88] waiting for apiserver healthz status ...
	I1217 00:37:22.451721  214740 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1217 00:37:22.455335  214740 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1217 00:37:22.456064  214740 api_server.go:141] control plane version: v1.34.2
	I1217 00:37:22.456082  214740 api_server.go:131] duration metric: took 4.375754ms to wait for apiserver health ...
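The healthz probe above can be reproduced from the host with curl, reusing the client certificate paths from the rest.Config dump earlier in this run (direct reachability of 192.168.85.2 from outside the docker network is an assumption):

	curl --cacert /home/jenkins/minikube-integration/22168-12816/.minikube/ca.crt \
	     --cert /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/pause-004564/client.crt \
	     --key  /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/pause-004564/client.key \
	     https://192.168.85.2:8443/healthz
	# expected body: ok
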
	I1217 00:37:22.456090  214740 system_pods.go:43] waiting for kube-system pods to appear ...
	I1217 00:37:22.458689  214740 system_pods.go:59] 7 kube-system pods found
	I1217 00:37:22.458718  214740 system_pods.go:61] "coredns-66bc5c9577-9srqt" [20274228-c195-4865-9917-969f5e20ced1] Running
	I1217 00:37:22.458724  214740 system_pods.go:61] "etcd-pause-004564" [21023270-c3c6-4b81-af27-19ee45d40732] Running
	I1217 00:37:22.458727  214740 system_pods.go:61] "kindnet-7hj2r" [ff07be68-3286-4d8e-8ef8-3d79b41d1932] Running
	I1217 00:37:22.458731  214740 system_pods.go:61] "kube-apiserver-pause-004564" [ac3fbc82-ca93-433f-bf43-490a35f17f4e] Running
	I1217 00:37:22.458735  214740 system_pods.go:61] "kube-controller-manager-pause-004564" [0b2d3115-6b57-401b-9dd8-0fd436ecbc04] Running
	I1217 00:37:22.458738  214740 system_pods.go:61] "kube-proxy-42nwb" [da8260dd-d848-423f-a84e-d902bebad105] Running
	I1217 00:37:22.458743  214740 system_pods.go:61] "kube-scheduler-pause-004564" [03c56ab0-b2f5-402f-955a-bf26dd76fd1a] Running
	I1217 00:37:22.458751  214740 system_pods.go:74] duration metric: took 2.656602ms to wait for pod list to return data ...
	I1217 00:37:22.458756  214740 default_sa.go:34] waiting for default service account to be created ...
	I1217 00:37:22.460308  214740 default_sa.go:45] found service account: "default"
	I1217 00:37:22.460324  214740 default_sa.go:55] duration metric: took 1.562793ms for default service account to be created ...
	I1217 00:37:22.460331  214740 system_pods.go:116] waiting for k8s-apps to be running ...
	I1217 00:37:22.462416  214740 system_pods.go:86] 7 kube-system pods found
	I1217 00:37:22.462443  214740 system_pods.go:89] "coredns-66bc5c9577-9srqt" [20274228-c195-4865-9917-969f5e20ced1] Running
	I1217 00:37:22.462448  214740 system_pods.go:89] "etcd-pause-004564" [21023270-c3c6-4b81-af27-19ee45d40732] Running
	I1217 00:37:22.462451  214740 system_pods.go:89] "kindnet-7hj2r" [ff07be68-3286-4d8e-8ef8-3d79b41d1932] Running
	I1217 00:37:22.462455  214740 system_pods.go:89] "kube-apiserver-pause-004564" [ac3fbc82-ca93-433f-bf43-490a35f17f4e] Running
	I1217 00:37:22.462458  214740 system_pods.go:89] "kube-controller-manager-pause-004564" [0b2d3115-6b57-401b-9dd8-0fd436ecbc04] Running
	I1217 00:37:22.462461  214740 system_pods.go:89] "kube-proxy-42nwb" [da8260dd-d848-423f-a84e-d902bebad105] Running
	I1217 00:37:22.462465  214740 system_pods.go:89] "kube-scheduler-pause-004564" [03c56ab0-b2f5-402f-955a-bf26dd76fd1a] Running
	I1217 00:37:22.462470  214740 system_pods.go:126] duration metric: took 2.134549ms to wait for k8s-apps to be running ...
	I1217 00:37:22.462479  214740 system_svc.go:44] waiting for kubelet service to be running ....
	I1217 00:37:22.462526  214740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 00:37:22.474146  214740 system_svc.go:56] duration metric: took 11.663396ms WaitForService to wait for kubelet
	I1217 00:37:22.474169  214740 kubeadm.go:587] duration metric: took 161.612311ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 00:37:22.474189  214740 node_conditions.go:102] verifying NodePressure condition ...
	I1217 00:37:22.475757  214740 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1217 00:37:22.475776  214740 node_conditions.go:123] node cpu capacity is 8
	I1217 00:37:22.475792  214740 node_conditions.go:105] duration metric: took 1.59607ms to run NodePressure ...
	I1217 00:37:22.475801  214740 start.go:242] waiting for startup goroutines ...
	I1217 00:37:22.475808  214740 start.go:247] waiting for cluster config update ...
	I1217 00:37:22.475815  214740 start.go:256] writing updated cluster config ...
	I1217 00:37:22.476088  214740 ssh_runner.go:195] Run: rm -f paused
	I1217 00:37:22.479453  214740 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 00:37:22.480042  214740 kapi.go:59] client config for pause-004564: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22168-12816/.minikube/profiles/pause-004564/client.crt", KeyFile:"/home/jenkins/minikube-integration/22168-12816/.minikube/profiles/pause-004564/client.key", CAFile:"/home/jenkins/minikube-integration/22168-12816/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]stri
ng(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28171a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1217 00:37:22.482102  214740 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-9srqt" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:37:22.485237  214740 pod_ready.go:94] pod "coredns-66bc5c9577-9srqt" is "Ready"
	I1217 00:37:22.485258  214740 pod_ready.go:86] duration metric: took 3.138722ms for pod "coredns-66bc5c9577-9srqt" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:37:22.486650  214740 pod_ready.go:83] waiting for pod "etcd-pause-004564" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:37:22.489557  214740 pod_ready.go:94] pod "etcd-pause-004564" is "Ready"
	I1217 00:37:22.489573  214740 pod_ready.go:86] duration metric: took 2.907346ms for pod "etcd-pause-004564" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:37:22.491011  214740 pod_ready.go:83] waiting for pod "kube-apiserver-pause-004564" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:37:22.493985  214740 pod_ready.go:94] pod "kube-apiserver-pause-004564" is "Ready"
	I1217 00:37:22.494025  214740 pod_ready.go:86] duration metric: took 2.996511ms for pod "kube-apiserver-pause-004564" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:37:22.495417  214740 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-004564" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:37:22.882759  214740 pod_ready.go:94] pod "kube-controller-manager-pause-004564" is "Ready"
	I1217 00:37:22.882785  214740 pod_ready.go:86] duration metric: took 387.352194ms for pod "kube-controller-manager-pause-004564" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:37:23.083532  214740 pod_ready.go:83] waiting for pod "kube-proxy-42nwb" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:37:23.483037  214740 pod_ready.go:94] pod "kube-proxy-42nwb" is "Ready"
	I1217 00:37:23.483064  214740 pod_ready.go:86] duration metric: took 399.507073ms for pod "kube-proxy-42nwb" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:37:23.682669  214740 pod_ready.go:83] waiting for pod "kube-scheduler-pause-004564" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:37:24.083039  214740 pod_ready.go:94] pod "kube-scheduler-pause-004564" is "Ready"
	I1217 00:37:24.083063  214740 pod_ready.go:86] duration metric: took 400.371974ms for pod "kube-scheduler-pause-004564" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:37:24.083074  214740 pod_ready.go:40] duration metric: took 1.603595614s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 00:37:24.124051  214740 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1217 00:37:24.126627  214740 out.go:179] * Done! kubectl is now configured to use "pause-004564" cluster and "default" namespace by default
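With the profile reported healthy, the kubeconfig context written above can be exercised directly; minikube names the kubectl context after the profile, so a quick follow-up (not part of the test) would be:

	kubectl --context pause-004564 get nodes
	kubectl --context pause-004564 -n kube-system get pods
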
	I1217 00:37:19.735684  214288 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kubernetes-upgrade-803959 --name kubernetes-upgrade-803959 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-803959 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kubernetes-upgrade-803959 --network kubernetes-upgrade-803959 --ip 192.168.76.2 --volume kubernetes-upgrade-803959:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78
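Each --publish=127.0.0.1::<port> flag in the docker run above binds a node port (22, 2376, 5000, 8443, 32443) to an ephemeral localhost port, which is why later steps inspect the container to learn the SSH port; the mapping can also be read back with:

	docker port kubernetes-upgrade-803959 22/tcp
	# e.g. 127.0.0.1:33003, the port the SSH client connects to a few lines below
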
	I1217 00:37:20.011686  214288 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-803959 --format={{.State.Running}}
	I1217 00:37:20.031100  214288 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-803959 --format={{.State.Status}}
	I1217 00:37:20.049221  214288 cli_runner.go:164] Run: docker exec kubernetes-upgrade-803959 stat /var/lib/dpkg/alternatives/iptables
	I1217 00:37:20.096515  214288 oci.go:144] the created container "kubernetes-upgrade-803959" has a running status.
	I1217 00:37:20.096540  214288 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22168-12816/.minikube/machines/kubernetes-upgrade-803959/id_rsa...
	I1217 00:37:20.119063  214288 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22168-12816/.minikube/machines/kubernetes-upgrade-803959/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1217 00:37:20.149430  214288 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-803959 --format={{.State.Status}}
	I1217 00:37:20.172660  214288 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1217 00:37:20.172683  214288 kic_runner.go:114] Args: [docker exec --privileged kubernetes-upgrade-803959 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1217 00:37:20.216510  214288 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-803959 --format={{.State.Status}}
	I1217 00:37:20.237344  214288 machine.go:94] provisionDockerMachine start ...
	I1217 00:37:20.237430  214288 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-803959
	I1217 00:37:20.259176  214288 main.go:143] libmachine: Using SSH client type: native
	I1217 00:37:20.259560  214288 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33003 <nil> <nil>}
	I1217 00:37:20.259584  214288 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 00:37:20.260314  214288 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:41214->127.0.0.1:33003: read: connection reset by peer
	I1217 00:37:23.386034  214288 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-803959
	
	I1217 00:37:23.386062  214288 ubuntu.go:182] provisioning hostname "kubernetes-upgrade-803959"
	I1217 00:37:23.386115  214288 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-803959
	I1217 00:37:23.404553  214288 main.go:143] libmachine: Using SSH client type: native
	I1217 00:37:23.404832  214288 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33003 <nil> <nil>}
	I1217 00:37:23.404855  214288 main.go:143] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-803959 && echo "kubernetes-upgrade-803959" | sudo tee /etc/hostname
	I1217 00:37:23.539464  214288 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-803959
	
	I1217 00:37:23.539539  214288 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-803959
	I1217 00:37:23.556986  214288 main.go:143] libmachine: Using SSH client type: native
	I1217 00:37:23.557224  214288 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33003 <nil> <nil>}
	I1217 00:37:23.557241  214288 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-803959' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-803959/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-803959' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 00:37:23.684037  214288 main.go:143] libmachine: SSH cmd err, output: <nil>: 
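
The exchange above is the hostname provisioning step: minikube sets and persists the guest hostname over SSH, then touches /etc/hosts only if no 127.0.1.1 entry for the new name exists yet. A compact Go sketch of assembling such a guarded command (hypothetical helper with simplified quoting; the commands actually run are the ones logged above):

package main

import "fmt"

// hostnameCmd assembles a guarded shell command: set the hostname, persist
// it, and map it to 127.0.1.1 only if /etc/hosts does not already know the
// name. Hypothetical helper, not minikube's real implementation.
func hostnameCmd(name string) string {
	return fmt.Sprintf(`sudo hostname %[1]s && echo "%[1]s" | sudo tee /etc/hostname
if ! grep -q '%[1]s' /etc/hosts; then
  echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
fi`, name)
}

func main() {
	fmt.Println(hostnameCmd("kubernetes-upgrade-803959"))
}
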
	I1217 00:37:23.684061  214288 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22168-12816/.minikube CaCertPath:/home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22168-12816/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22168-12816/.minikube}
	I1217 00:37:23.684082  214288 ubuntu.go:190] setting up certificates
	I1217 00:37:23.684097  214288 provision.go:84] configureAuth start
	I1217 00:37:23.684146  214288 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-803959
	I1217 00:37:23.702757  214288 provision.go:143] copyHostCerts
	I1217 00:37:23.702840  214288 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-12816/.minikube/ca.pem, removing ...
	I1217 00:37:23.702856  214288 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-12816/.minikube/ca.pem
	I1217 00:37:23.702940  214288 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22168-12816/.minikube/ca.pem (1078 bytes)
	I1217 00:37:23.703083  214288 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-12816/.minikube/cert.pem, removing ...
	I1217 00:37:23.703097  214288 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-12816/.minikube/cert.pem
	I1217 00:37:23.703141  214288 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22168-12816/.minikube/cert.pem (1123 bytes)
	I1217 00:37:23.703245  214288 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-12816/.minikube/key.pem, removing ...
	I1217 00:37:23.703256  214288 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-12816/.minikube/key.pem
	I1217 00:37:23.703291  214288 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22168-12816/.minikube/key.pem (1679 bytes)
	I1217 00:37:23.703388  214288 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22168-12816/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-803959 san=[127.0.0.1 192.168.76.2 kubernetes-upgrade-803959 localhost minikube]
	I1217 00:37:23.821073  214288 provision.go:177] copyRemoteCerts
	I1217 00:37:23.821135  214288 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 00:37:23.821166  214288 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-803959
	I1217 00:37:23.838853  214288 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33003 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/kubernetes-upgrade-803959/id_rsa Username:docker}
	I1217 00:37:23.930845  214288 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1217 00:37:23.950601  214288 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1217 00:37:23.967643  214288 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 00:37:23.984599  214288 provision.go:87] duration metric: took 300.480777ms to configureAuth
	I1217 00:37:23.984627  214288 ubuntu.go:206] setting minikube options for container-runtime
	I1217 00:37:23.984834  214288 config.go:182] Loaded profile config "kubernetes-upgrade-803959": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1217 00:37:23.984956  214288 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-803959
	I1217 00:37:24.002834  214288 main.go:143] libmachine: Using SSH client type: native
	I1217 00:37:24.003120  214288 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33003 <nil> <nil>}
	I1217 00:37:24.003142  214288 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 00:37:24.274144  214288 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 00:37:24.274177  214288 machine.go:97] duration metric: took 4.036809944s to provisionDockerMachine
	I1217 00:37:24.274190  214288 client.go:176] duration metric: took 9.400259039s to LocalClient.Create
	I1217 00:37:24.274209  214288 start.go:167] duration metric: took 9.400321071s to libmachine.API.Create "kubernetes-upgrade-803959"
	I1217 00:37:24.274216  214288 start.go:293] postStartSetup for "kubernetes-upgrade-803959" (driver="docker")
	I1217 00:37:24.274225  214288 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 00:37:24.274273  214288 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 00:37:24.274317  214288 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-803959
	I1217 00:37:24.294252  214288 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33003 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/kubernetes-upgrade-803959/id_rsa Username:docker}
	I1217 00:37:24.387176  214288 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 00:37:24.390481  214288 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 00:37:24.390508  214288 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 00:37:24.390517  214288 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-12816/.minikube/addons for local assets ...
	I1217 00:37:24.390563  214288 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-12816/.minikube/files for local assets ...
	I1217 00:37:24.390652  214288 filesync.go:149] local asset: /home/jenkins/minikube-integration/22168-12816/.minikube/files/etc/ssl/certs/163542.pem -> 163542.pem in /etc/ssl/certs
	I1217 00:37:24.390767  214288 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 00:37:24.398008  214288 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/files/etc/ssl/certs/163542.pem --> /etc/ssl/certs/163542.pem (1708 bytes)
	I1217 00:37:24.417522  214288 start.go:296] duration metric: took 143.296478ms for postStartSetup
	I1217 00:37:24.417813  214288 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-803959
	I1217 00:37:24.435634  214288 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/kubernetes-upgrade-803959/config.json ...
	I1217 00:37:24.435858  214288 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 00:37:24.435898  214288 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-803959
	I1217 00:37:24.460089  214288 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33003 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/kubernetes-upgrade-803959/id_rsa Username:docker}
	I1217 00:37:24.548666  214288 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 00:37:24.553649  214288 start.go:128] duration metric: took 9.681473905s to createHost
	I1217 00:37:24.553673  214288 start.go:83] releasing machines lock for "kubernetes-upgrade-803959", held for 9.681612447s
	I1217 00:37:24.553727  214288 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-803959
	I1217 00:37:24.572438  214288 ssh_runner.go:195] Run: cat /version.json
	I1217 00:37:24.572491  214288 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-803959
	I1217 00:37:24.572516  214288 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 00:37:24.572613  214288 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-803959
	I1217 00:37:24.592132  214288 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33003 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/kubernetes-upgrade-803959/id_rsa Username:docker}
	I1217 00:37:24.593010  214288 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33003 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/kubernetes-upgrade-803959/id_rsa Username:docker}
	I1217 00:37:24.507076  211439 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1217 00:37:24.507116  211439 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1217 00:37:23.394610  213566 cli_runner.go:164] Run: docker container inspect missing-upgrade-043393 --format={{.State.Status}}
	W1217 00:37:23.412514  213566 cli_runner.go:211] docker container inspect missing-upgrade-043393 --format={{.State.Status}} returned with exit code 1
	I1217 00:37:23.412583  213566 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-043393": docker container inspect missing-upgrade-043393 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-043393
	I1217 00:37:23.412598  213566 oci.go:673] temporary error: container missing-upgrade-043393 status is  but expect it to be exited
	I1217 00:37:23.412634  213566 retry.go:31] will retry after 7.521946715s: couldn't verify container is exited. %v: unknown state "missing-upgrade-043393": docker container inspect missing-upgrade-043393 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-043393
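
The missing-upgrade-043393 lines interleaved here come from a parallel test whose container is already gone; its retry helper waits and tries again instead of failing immediately. A stripped-down sketch of that retry-with-growing-delay pattern (illustrative only; the real helper chooses its own delays):

package main

import (
	"errors"
	"fmt"
	"time"
)

// retry runs fn up to attempts times, sleeping a little longer after each
// failure. Illustrative sketch of the pattern seen in the log above.
func retry(attempts int, initial time.Duration, fn func() error) error {
	delay := initial
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
		delay *= 2
	}
	return err
}

func main() {
	calls := 0
	_ = retry(4, 500*time.Millisecond, func() error {
		calls++
		if calls < 3 {
			return errors.New("couldn't verify container is exited")
		}
		return nil
	})
}
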
	I1217 00:37:24.735118  214288 ssh_runner.go:195] Run: systemctl --version
	I1217 00:37:24.741360  214288 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 00:37:24.775927  214288 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 00:37:24.780968  214288 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 00:37:24.781050  214288 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 00:37:24.806625  214288 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
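
Before choosing a CNI, minikube renames any bridge or podman CNI configs so the runtime ignores them, which is what the find/mv command above does. A rough Go equivalent (hypothetical helper; the real command also prints each path it moves):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableBridgeCNIConfigs renames bridge/podman CNI config files so the
// container runtime no longer loads them. Sketch only.
func disableBridgeCNIConfigs(dir string) ([]string, error) {
	var disabled []string
	for _, pattern := range []string{"*bridge*", "*podman*"} {
		matches, err := filepath.Glob(filepath.Join(dir, pattern))
		if err != nil {
			return nil, err
		}
		for _, m := range matches {
			if strings.HasSuffix(m, ".mk_disabled") {
				continue
			}
			if err := os.Rename(m, m+".mk_disabled"); err != nil {
				return nil, err
			}
			disabled = append(disabled, m)
		}
	}
	return disabled, nil
}

func main() {
	files, err := disableBridgeCNIConfigs("/etc/cni/net.d")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("disabled:", files)
}
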
	I1217 00:37:24.806647  214288 start.go:496] detecting cgroup driver to use...
	I1217 00:37:24.806676  214288 detect.go:190] detected "systemd" cgroup driver on host os
	I1217 00:37:24.806715  214288 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 00:37:24.822377  214288 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 00:37:24.834361  214288 docker.go:218] disabling cri-docker service (if available) ...
	I1217 00:37:24.834406  214288 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 00:37:24.850860  214288 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 00:37:24.875561  214288 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 00:37:24.969028  214288 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 00:37:25.064939  214288 docker.go:234] disabling docker service ...
	I1217 00:37:25.065006  214288 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 00:37:25.082208  214288 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 00:37:25.095490  214288 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 00:37:25.175274  214288 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 00:37:25.253975  214288 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 00:37:25.266329  214288 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 00:37:25.279486  214288 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1217 00:37:25.279536  214288 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:37:25.289029  214288 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1217 00:37:25.289078  214288 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:37:25.297239  214288 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:37:25.305121  214288 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:37:25.313119  214288 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 00:37:25.320471  214288 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:37:25.328187  214288 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:37:25.341852  214288 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:37:25.349964  214288 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 00:37:25.357414  214288 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 00:37:25.364181  214288 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:37:25.439737  214288 ssh_runner.go:195] Run: sudo systemctl restart crio
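
Taken together, the echo/sed sequence above points crictl at the CRI-O socket and adjusts the 02-crio.conf drop-in before the restart. Reconstructed from the commands (not captured from the host; key placement within the stock drop-in is assumed), the affected settings end up roughly as:

# /etc/crictl.yaml
runtime-endpoint: unix:///var/run/crio/crio.sock

# /etc/crio/crio.conf.d/02-crio.conf, relevant keys only
pause_image = "registry.k8s.io/pause:3.9"
cgroup_manager = "systemd"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]
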
	I1217 00:37:25.568057  214288 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 00:37:25.568126  214288 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 00:37:25.571936  214288 start.go:564] Will wait 60s for crictl version
	I1217 00:37:25.571984  214288 ssh_runner.go:195] Run: which crictl
	I1217 00:37:25.575398  214288 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 00:37:25.600173  214288 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1217 00:37:25.600257  214288 ssh_runner.go:195] Run: crio --version
	I1217 00:37:25.633214  214288 ssh_runner.go:195] Run: crio --version
	I1217 00:37:25.661443  214288 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.3 ...
	I1217 00:37:25.662404  214288 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-803959 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 00:37:25.679646  214288 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1217 00:37:25.683547  214288 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 00:37:25.693138  214288 kubeadm.go:884] updating cluster {Name:kubernetes-upgrade-803959 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-803959 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 00:37:25.693274  214288 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1217 00:37:25.693331  214288 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 00:37:25.725762  214288 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 00:37:25.725783  214288 crio.go:433] Images already preloaded, skipping extraction
	I1217 00:37:25.725834  214288 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 00:37:25.752573  214288 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 00:37:25.752594  214288 cache_images.go:86] Images are preloaded, skipping loading
	I1217 00:37:25.752601  214288 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.28.0 crio true true} ...
	I1217 00:37:25.752709  214288 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=kubernetes-upgrade-803959 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-803959 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 00:37:25.752770  214288 ssh_runner.go:195] Run: crio config
	I1217 00:37:25.799721  214288 cni.go:84] Creating CNI manager for ""
	I1217 00:37:25.799746  214288 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 00:37:25.799762  214288 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 00:37:25.799783  214288 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-803959 NodeName:kubernetes-upgrade-803959 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 00:37:25.799928  214288 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-803959"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 00:37:25.800007  214288 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1217 00:37:25.807963  214288 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 00:37:25.808047  214288 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 00:37:25.815578  214288 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I1217 00:37:25.827834  214288 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1217 00:37:25.841952  214288 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I1217 00:37:25.853770  214288 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1217 00:37:25.857185  214288 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
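
Both host entries (host.minikube.internal earlier and control-plane.minikube.internal here) are written with the same filter-then-append pipeline over /etc/hosts: strip any previous line for the name, add the new mapping, and copy the temp file back. A small Go sketch of the same idea (hypothetical helper operating on an ordinary file path):

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHostsEntry drops any existing line for host and appends a fresh
// "ip<TAB>host" mapping, mirroring the grep -v / echo / cp pipeline above.
func upsertHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	// The file path is a placeholder; the IP and name are the ones logged above.
	if err := upsertHostsEntry("hosts.txt", "192.168.76.2", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
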
	I1217 00:37:25.866323  214288 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:37:25.949311  214288 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 00:37:25.978157  214288 certs.go:69] Setting up /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/kubernetes-upgrade-803959 for IP: 192.168.76.2
	I1217 00:37:25.978180  214288 certs.go:195] generating shared ca certs ...
	I1217 00:37:25.978198  214288 certs.go:227] acquiring lock for ca certs: {Name:mk3fafd0dd66863a6056cb02497503a5e6afecf4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:37:25.978356  214288 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22168-12816/.minikube/ca.key
	I1217 00:37:25.978414  214288 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22168-12816/.minikube/proxy-client-ca.key
	I1217 00:37:25.978426  214288 certs.go:257] generating profile certs ...
	I1217 00:37:25.978502  214288 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/kubernetes-upgrade-803959/client.key
	I1217 00:37:25.978524  214288 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/kubernetes-upgrade-803959/client.crt with IP's: []
	I1217 00:37:26.049276  214288 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/kubernetes-upgrade-803959/client.crt ...
	I1217 00:37:26.049303  214288 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/kubernetes-upgrade-803959/client.crt: {Name:mkee8755d595ce714196be6169bdbd6f7de50d84 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:37:26.049459  214288 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/kubernetes-upgrade-803959/client.key ...
	I1217 00:37:26.049472  214288 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/kubernetes-upgrade-803959/client.key: {Name:mk9c6411ba45dfd3f091d076eb1ab6ee3011744a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:37:26.049567  214288 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/kubernetes-upgrade-803959/apiserver.key.677aa1db
	I1217 00:37:26.049583  214288 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/kubernetes-upgrade-803959/apiserver.crt.677aa1db with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1217 00:37:26.095842  214288 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/kubernetes-upgrade-803959/apiserver.crt.677aa1db ...
	I1217 00:37:26.095865  214288 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/kubernetes-upgrade-803959/apiserver.crt.677aa1db: {Name:mke3d04a47f067f1ed3b1519f70cce7185ac7ccc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:37:26.096002  214288 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/kubernetes-upgrade-803959/apiserver.key.677aa1db ...
	I1217 00:37:26.096014  214288 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/kubernetes-upgrade-803959/apiserver.key.677aa1db: {Name:mkaeff6240fd9e99d2bd8446181797f660e924f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:37:26.096090  214288 certs.go:382] copying /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/kubernetes-upgrade-803959/apiserver.crt.677aa1db -> /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/kubernetes-upgrade-803959/apiserver.crt
	I1217 00:37:26.096159  214288 certs.go:386] copying /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/kubernetes-upgrade-803959/apiserver.key.677aa1db -> /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/kubernetes-upgrade-803959/apiserver.key
	I1217 00:37:26.096213  214288 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/kubernetes-upgrade-803959/proxy-client.key
	I1217 00:37:26.096227  214288 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/kubernetes-upgrade-803959/proxy-client.crt with IP's: []
	I1217 00:37:26.172370  214288 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/kubernetes-upgrade-803959/proxy-client.crt ...
	I1217 00:37:26.172391  214288 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/kubernetes-upgrade-803959/proxy-client.crt: {Name:mkc1c2b870c62075b1f46ce611e56d4afa7eedfb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:37:26.172520  214288 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/kubernetes-upgrade-803959/proxy-client.key ...
	I1217 00:37:26.172533  214288 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/kubernetes-upgrade-803959/proxy-client.key: {Name:mkb234914addf660de3781da17a413a3b8f2727d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
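
The profile certificates above are all issued against the shared minikubeCA, with the apiserver cert carrying the IP SANs 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.76.2. A condensed crypto/x509 sketch of issuing a CA-signed certificate with IP SANs (illustrative only; minikube's own crypto and lock handling shown in the log do more):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"math/big"
	"net"
	"time"
)

// signedCertWithSANs issues a certificate carrying the given IP SANs, signed
// by the supplied CA. Sketch only: PEM encoding and DNS-name SANs are omitted.
func signedCertWithSANs(caCert *x509.Certificate, caKey *rsa.PrivateKey, ips []net.IP) ([]byte, *rsa.PrivateKey, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  ips,
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	return der, key, err
}

func main() {
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)
	sans := []net.IP{net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"), net.ParseIP("192.168.76.2")}
	if _, _, err := signedCertWithSANs(caCert, caKey, sans); err != nil {
		panic(err)
	}
}
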
	I1217 00:37:26.172688  214288 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/16354.pem (1338 bytes)
	W1217 00:37:26.172725  214288 certs.go:480] ignoring /home/jenkins/minikube-integration/22168-12816/.minikube/certs/16354_empty.pem, impossibly tiny 0 bytes
	I1217 00:37:26.172732  214288 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca-key.pem (1679 bytes)
	I1217 00:37:26.172757  214288 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem (1078 bytes)
	I1217 00:37:26.172780  214288 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/cert.pem (1123 bytes)
	I1217 00:37:26.172805  214288 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/key.pem (1679 bytes)
	I1217 00:37:26.172845  214288 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/files/etc/ssl/certs/163542.pem (1708 bytes)
	I1217 00:37:26.173389  214288 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 00:37:26.190794  214288 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1217 00:37:26.207110  214288 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 00:37:26.223489  214288 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1671 bytes)
	I1217 00:37:26.239144  214288 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/kubernetes-upgrade-803959/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1217 00:37:26.254863  214288 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/kubernetes-upgrade-803959/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1217 00:37:26.271542  214288 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/kubernetes-upgrade-803959/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 00:37:26.287860  214288 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/kubernetes-upgrade-803959/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 00:37:26.304319  214288 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 00:37:26.322797  214288 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/certs/16354.pem --> /usr/share/ca-certificates/16354.pem (1338 bytes)
	I1217 00:37:26.339193  214288 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/files/etc/ssl/certs/163542.pem --> /usr/share/ca-certificates/163542.pem (1708 bytes)
	I1217 00:37:26.355362  214288 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 00:37:26.367379  214288 ssh_runner.go:195] Run: openssl version
	I1217 00:37:26.373232  214288 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:37:26.380600  214288 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 00:37:26.388525  214288 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:37:26.391953  214288 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 00:05 /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:37:26.392011  214288 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:37:26.431400  214288 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 00:37:26.438756  214288 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1217 00:37:26.445595  214288 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/16354.pem
	I1217 00:37:26.452283  214288 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/16354.pem /etc/ssl/certs/16354.pem
	I1217 00:37:26.459170  214288 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16354.pem
	I1217 00:37:26.462601  214288 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 00:13 /usr/share/ca-certificates/16354.pem
	I1217 00:37:26.462647  214288 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16354.pem
	I1217 00:37:26.496514  214288 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 00:37:26.504421  214288 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/16354.pem /etc/ssl/certs/51391683.0
	I1217 00:37:26.511513  214288 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/163542.pem
	I1217 00:37:26.518592  214288 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/163542.pem /etc/ssl/certs/163542.pem
	I1217 00:37:26.526927  214288 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/163542.pem
	I1217 00:37:26.530513  214288 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 00:13 /usr/share/ca-certificates/163542.pem
	I1217 00:37:26.530563  214288 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/163542.pem
	I1217 00:37:26.569638  214288 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 00:37:26.577325  214288 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/163542.pem /etc/ssl/certs/3ec20f2e.0
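
The openssl/ln pairs above install each CA bundle under /etc/ssl/certs by symlinking it to its OpenSSL subject hash with a .0 suffix (b5213941.0, 51391683.0, 3ec20f2e.0 in this run). A hypothetical Go helper reproducing the same convention by shelling out to openssl:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCACert symlinks certPath into dir under its OpenSSL subject hash,
// using the <hash>.0 naming scheme seen in the log above.
func installCACert(certPath, dir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	link := filepath.Join(dir, strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // mimic `ln -fs`: replace any existing link
	return os.Symlink(certPath, link)
}

func main() {
	// Paths are placeholders for illustration.
	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
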
	I1217 00:37:26.585208  214288 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 00:37:26.589738  214288 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1217 00:37:26.589821  214288 kubeadm.go:401] StartCluster: {Name:kubernetes-upgrade-803959 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-803959 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:37:26.589956  214288 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 00:37:26.590028  214288 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 00:37:26.618966  214288 cri.go:89] found id: ""
	I1217 00:37:26.619051  214288 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 00:37:26.626972  214288 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 00:37:26.634834  214288 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 00:37:26.634876  214288 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 00:37:26.642746  214288 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 00:37:26.642766  214288 kubeadm.go:158] found existing configuration files:
	
	I1217 00:37:26.642800  214288 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1217 00:37:26.650317  214288 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 00:37:26.650372  214288 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 00:37:26.658398  214288 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1217 00:37:26.666176  214288 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 00:37:26.666229  214288 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 00:37:26.673959  214288 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1217 00:37:26.682745  214288 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 00:37:26.682801  214288 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 00:37:26.690635  214288 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1217 00:37:26.699346  214288 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 00:37:26.699397  214288 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 00:37:26.707280  214288 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 00:37:26.753472  214288 kubeadm.go:319] [init] Using Kubernetes version: v1.28.0
	I1217 00:37:26.753534  214288 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 00:37:26.791916  214288 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 00:37:26.791976  214288 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1217 00:37:26.792063  214288 kubeadm.go:319] OS: Linux
	I1217 00:37:26.792136  214288 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 00:37:26.792176  214288 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 00:37:26.792216  214288 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 00:37:26.792281  214288 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 00:37:26.792334  214288 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 00:37:26.792408  214288 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 00:37:26.792480  214288 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 00:37:26.792549  214288 kubeadm.go:319] CGROUPS_IO: enabled
	I1217 00:37:26.866610  214288 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 00:37:26.866819  214288 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 00:37:26.866965  214288 kubeadm.go:319] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1217 00:37:27.013958  214288 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	
	
	==> CRI-O <==
	Dec 17 00:37:21 pause-004564 crio[2201]: time="2025-12-17T00:37:21.202831072Z" level=info msg="RDT not available in the host system"
	Dec 17 00:37:21 pause-004564 crio[2201]: time="2025-12-17T00:37:21.202846431Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 17 00:37:21 pause-004564 crio[2201]: time="2025-12-17T00:37:21.203564748Z" level=info msg="Conmon does support the --sync option"
	Dec 17 00:37:21 pause-004564 crio[2201]: time="2025-12-17T00:37:21.203586484Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 17 00:37:21 pause-004564 crio[2201]: time="2025-12-17T00:37:21.203599358Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 17 00:37:21 pause-004564 crio[2201]: time="2025-12-17T00:37:21.204358357Z" level=info msg="Conmon does support the --sync option"
	Dec 17 00:37:21 pause-004564 crio[2201]: time="2025-12-17T00:37:21.204372319Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 17 00:37:21 pause-004564 crio[2201]: time="2025-12-17T00:37:21.207867548Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 17 00:37:21 pause-004564 crio[2201]: time="2025-12-17T00:37:21.207899813Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 17 00:37:21 pause-004564 crio[2201]: time="2025-12-17T00:37:21.208455562Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hoo
ks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_
mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory
= \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    namespaced_auth_dir = \"/etc/crio/auth\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \
"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nr
i]\n    enable_nri = true\n    nri_listen = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Dec 17 00:37:21 pause-004564 crio[2201]: time="2025-12-17T00:37:21.208787599Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Dec 17 00:37:21 pause-004564 crio[2201]: time="2025-12-17T00:37:21.208849765Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Dec 17 00:37:21 pause-004564 crio[2201]: time="2025-12-17T00:37:21.276615123Z" level=info msg="Got pod network &{Name:coredns-66bc5c9577-9srqt Namespace:kube-system ID:9e94d64b900383c07367c4aebb899789be79a11c1f806f060a08a81fae82ceb7 UID:20274228-c195-4865-9917-969f5e20ced1 NetNS:/var/run/netns/8cac6a75-50c6-43fb-8cf6-464b36b4b831 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000732218}] Aliases:map[]}"
	Dec 17 00:37:21 pause-004564 crio[2201]: time="2025-12-17T00:37:21.276775427Z" level=info msg="Checking pod kube-system_coredns-66bc5c9577-9srqt for CNI network kindnet (type=ptp)"
	Dec 17 00:37:21 pause-004564 crio[2201]: time="2025-12-17T00:37:21.277177051Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 17 00:37:21 pause-004564 crio[2201]: time="2025-12-17T00:37:21.277199977Z" level=info msg="Starting seccomp notifier watcher"
	Dec 17 00:37:21 pause-004564 crio[2201]: time="2025-12-17T00:37:21.27723524Z" level=info msg="Create NRI interface"
	Dec 17 00:37:21 pause-004564 crio[2201]: time="2025-12-17T00:37:21.277303217Z" level=info msg="built-in NRI default validator is disabled"
	Dec 17 00:37:21 pause-004564 crio[2201]: time="2025-12-17T00:37:21.277308667Z" level=info msg="runtime interface created"
	Dec 17 00:37:21 pause-004564 crio[2201]: time="2025-12-17T00:37:21.277317574Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 17 00:37:21 pause-004564 crio[2201]: time="2025-12-17T00:37:21.27732269Z" level=info msg="runtime interface starting up..."
	Dec 17 00:37:21 pause-004564 crio[2201]: time="2025-12-17T00:37:21.277327383Z" level=info msg="starting plugins..."
	Dec 17 00:37:21 pause-004564 crio[2201]: time="2025-12-17T00:37:21.277336272Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 17 00:37:21 pause-004564 crio[2201]: time="2025-12-17T00:37:21.277612672Z" level=info msg="No systemd watchdog enabled"
	Dec 17 00:37:21 pause-004564 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	261675999c5ab       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   15 seconds ago      Running             coredns                   0                   9e94d64b90038       coredns-66bc5c9577-9srqt               kube-system
	e75763c0a782d       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   27 seconds ago      Running             kindnet-cni               0                   27daaa464ebaa       kindnet-7hj2r                          kube-system
	312296c559601       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45   27 seconds ago      Running             kube-proxy                0                   dde2846856718       kube-proxy-42nwb                       kube-system
	2ec6662e3cdaf       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8   38 seconds ago      Running             kube-controller-manager   0                   00fd6dd5f49ea       kube-controller-manager-pause-004564   kube-system
	8ad49dd6c20bb       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85   38 seconds ago      Running             kube-apiserver            0                   ad054bb0b24f2       kube-apiserver-pause-004564            kube-system
	822c708ab7dfc       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952   38 seconds ago      Running             kube-scheduler            0                   1c74ccab50b6b       kube-scheduler-pause-004564            kube-system
	75a2fcfe1b225       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   38 seconds ago      Running             etcd                      0                   b8f5ca0b0e943       etcd-pause-004564                      kube-system
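The container status table above is the CRI-level view of the node. A minimal way to reproduce it by hand, assuming the CRI-O socket path that appears throughout these logs (/var/run/crio/crio.sock), is:

    # on the node, e.g. via `minikube -p pause-004564 ssh`
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a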
	
	
	==> coredns [261675999c5abb3189891f43e3b1b48ba4936a0794d59ca8d49a88c9a6851f5b] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:55164 - 4267 "HINFO IN 5990849036606057374.3411692580121151747. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.534861589s
	
	
	==> describe nodes <==
	Name:               pause-004564
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-004564
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c7bb9b74fe8fa422b352c813eb039f077f405cb1
	                    minikube.k8s.io/name=pause-004564
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_17T00_36_56_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Dec 2025 00:36:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-004564
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Dec 2025 00:37:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Dec 2025 00:37:12 +0000   Wed, 17 Dec 2025 00:36:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Dec 2025 00:37:12 +0000   Wed, 17 Dec 2025 00:36:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Dec 2025 00:37:12 +0000   Wed, 17 Dec 2025 00:36:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Dec 2025 00:37:12 +0000   Wed, 17 Dec 2025 00:37:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-004564
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 f435001bb2f82675fe905cfc693dda54
	  System UUID:                f5676ac5-495b-4003-91d5-52b86dab3f6f
	  Boot ID:                    0e9cedc6-c46e-4354-b3d2-9272a8b33ae5
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-9srqt                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     28s
	  kube-system                 etcd-pause-004564                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         34s
	  kube-system                 kindnet-7hj2r                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      28s
	  kube-system                 kube-apiserver-pause-004564             250m (3%)     0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-controller-manager-pause-004564    200m (2%)     0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-proxy-42nwb                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-scheduler-pause-004564             100m (1%)     0 (0%)      0 (0%)           0 (0%)         34s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 27s   kube-proxy       
	  Normal  Starting                 34s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  34s   kubelet          Node pause-004564 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    34s   kubelet          Node pause-004564 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     34s   kubelet          Node pause-004564 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           29s   node-controller  Node pause-004564 event: Registered Node pause-004564 in Controller
	  Normal  NodeReady                17s   kubelet          Node pause-004564 status is now: NodeReady
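The node summary above is a standard kubectl node description; assuming the pause-004564 kubeconfig context used elsewhere in this test is still valid, the equivalent direct query is:

    kubectl --context pause-004564 describe node pause-004564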
	
	
	==> dmesg <==
	[  +0.089382] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024236] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.864694] kauditd_printk_skb: 47 callbacks suppressed
	[Dec17 00:07] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +1.006904] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +1.023891] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +1.023891] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +1.023853] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +1.023879] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +2.048755] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +4.030595] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +8.447143] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[ +16.382404] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000015] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[Dec17 00:08] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	
	
	==> etcd [75a2fcfe1b2251781ce5430b7e7160f17def85d218ac84eb522f00d0f2ce3ccb] <==
	{"level":"warn","ts":"2025-12-17T00:36:52.506177Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49912","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:36:52.521812Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49916","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:36:52.538384Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49940","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:36:52.553220Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49944","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:36:52.566369Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49964","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:36:52.577451Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49976","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:36:52.590472Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49986","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:36:52.601524Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50014","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:36:52.616015Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50032","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:36:52.646815Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50062","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:36:52.657793Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50096","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:36:52.671875Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50114","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:36:52.684544Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50138","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:36:52.692838Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50160","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:36:52.699787Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50176","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:36:52.708310Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50194","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:36:52.715836Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:36:52.723929Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50234","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:36:52.733370Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50258","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:36:52.741616Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50278","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:36:52.752136Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50294","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:36:52.761743Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50320","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:36:52.780592Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50346","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:36:52.792464Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50382","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:36:52.894318Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50452","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 00:37:29 up  1:19,  0 user,  load average: 3.84, 1.97, 1.36
	Linux pause-004564 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [e75763c0a782db6b82bbcdce6755f2e6f1c075e2577e8608b78caad6bd9e0685] <==
	I1217 00:37:02.043647       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1217 00:37:02.043906       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1217 00:37:02.047096       1 main.go:148] setting mtu 1500 for CNI 
	I1217 00:37:02.047128       1 main.go:178] kindnetd IP family: "ipv4"
	I1217 00:37:02.047150       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-17T00:37:02Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1217 00:37:02.244623       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1217 00:37:02.245299       1 controller.go:381] "Waiting for informer caches to sync"
	I1217 00:37:02.245315       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1217 00:37:02.245604       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1217 00:37:02.641574       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1217 00:37:02.641625       1 metrics.go:72] Registering metrics
	I1217 00:37:02.641739       1 controller.go:711] "Syncing nftables rules"
	I1217 00:37:12.245062       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1217 00:37:12.245141       1 main.go:301] handling current node
	I1217 00:37:22.251242       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1217 00:37:22.251275       1 main.go:301] handling current node
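The kindnet line "nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock" refers to the same socket that nri_listen points at in the CRI-O config excerpt above. A quick check of whether that socket actually exists on the node (a diagnostic sketch, assuming the profile is still running) would be:

    minikube -p pause-004564 ssh -- ls -l /var/run/nri/nri.sock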
	
	
	==> kube-apiserver [8ad49dd6c20bb9213f368f20baf3c0d05e5d7b019e452f80bf3758ec6690483c] <==
	I1217 00:36:53.651211       1 policy_source.go:240] refreshing policies
	E1217 00:36:53.691350       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1217 00:36:53.693473       1 controller.go:667] quota admission added evaluator for: namespaces
	I1217 00:36:53.696408       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1217 00:36:53.698604       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1217 00:36:53.709139       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1217 00:36:53.710099       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1217 00:36:53.833727       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1217 00:36:54.497137       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1217 00:36:54.501724       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1217 00:36:54.501742       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1217 00:36:54.939953       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1217 00:36:54.975874       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1217 00:36:55.093438       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1217 00:36:55.106232       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1217 00:36:55.107497       1 controller.go:667] quota admission added evaluator for: endpoints
	I1217 00:36:55.119400       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1217 00:36:55.552524       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1217 00:36:55.858741       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1217 00:36:55.867547       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1217 00:36:55.873821       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1217 00:37:01.255538       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1217 00:37:01.258537       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1217 00:37:01.404780       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1217 00:37:01.603308       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [2ec6662e3cdaf85a7ed50022d863b9b780287c6ce573ffe5414b6462ad51e698] <==
	I1217 00:37:00.550267       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1217 00:37:00.551425       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1217 00:37:00.551438       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1217 00:37:00.551575       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1217 00:37:00.551581       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1217 00:37:00.551605       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1217 00:37:00.551772       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1217 00:37:00.552326       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1217 00:37:00.552338       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1217 00:37:00.552591       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1217 00:37:00.553839       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1217 00:37:00.553931       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1217 00:37:00.553973       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1217 00:37:00.554053       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1217 00:37:00.555702       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1217 00:37:00.556477       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1217 00:37:00.556540       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1217 00:37:00.556654       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1217 00:37:00.556676       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1217 00:37:00.556683       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1217 00:37:00.559433       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1217 00:37:00.563209       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-004564" podCIDRs=["10.244.0.0/24"]
	I1217 00:37:00.572958       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1217 00:37:00.576071       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1217 00:37:15.484035       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [312296c55960188c8b0406f7ef3b76b6aa39658e5891d4d5d9cb3e5c5de8a96b] <==
	I1217 00:37:01.830482       1 server_linux.go:53] "Using iptables proxy"
	I1217 00:37:01.907264       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1217 00:37:02.007680       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1217 00:37:02.007735       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1217 00:37:02.007836       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1217 00:37:02.028897       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1217 00:37:02.029067       1 server_linux.go:132] "Using iptables Proxier"
	I1217 00:37:02.034659       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1217 00:37:02.034944       1 server.go:527] "Version info" version="v1.34.2"
	I1217 00:37:02.034965       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 00:37:02.036592       1 config.go:309] "Starting node config controller"
	I1217 00:37:02.036610       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1217 00:37:02.036617       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1217 00:37:02.036724       1 config.go:200] "Starting service config controller"
	I1217 00:37:02.036733       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1217 00:37:02.036755       1 config.go:106] "Starting endpoint slice config controller"
	I1217 00:37:02.036760       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1217 00:37:02.036780       1 config.go:403] "Starting serviceCIDR config controller"
	I1217 00:37:02.036792       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1217 00:37:02.137305       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1217 00:37:02.137331       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1217 00:37:02.137303       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
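The kube-proxy warning above about nodePortAddresses being unset suggests `--nodeport-addresses primary`. In a kubeadm-managed cluster like this one, kube-proxy reads its KubeProxyConfiguration from the kube-proxy ConfigMap, so one way to inspect the current value (a sketch; the field may simply be null here) is:

    # look for the nodePortAddresses field of KubeProxyConfiguration in config.conf
    kubectl --context pause-004564 -n kube-system get configmap kube-proxy -o yaml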
	
	
	==> kube-scheduler [822c708ab7dfc212f531b65b81db4f5bc4505719e6a921f57d1232f703310c0a] <==
	E1217 00:36:53.592897       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1217 00:36:53.593005       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1217 00:36:53.593413       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1217 00:36:53.593470       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1217 00:36:53.593511       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1217 00:36:53.593554       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1217 00:36:53.594300       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1217 00:36:53.594384       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1217 00:36:53.595513       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1217 00:36:53.594645       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1217 00:36:53.594695       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1217 00:36:53.594733       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1217 00:36:53.594754       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1217 00:36:53.595104       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1217 00:36:53.594586       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1217 00:36:53.601668       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1217 00:36:54.428558       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1217 00:36:54.439982       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1217 00:36:54.474254       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1217 00:36:54.521787       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1217 00:36:54.578947       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1217 00:36:54.606777       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1217 00:36:54.743335       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1217 00:36:54.746434       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	I1217 00:36:56.884929       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 17 00:37:17 pause-004564 kubelet[1323]: E1217 00:37:17.712238    1323 kubelet_pods.go:1266] "Error listing containers" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 17 00:37:17 pause-004564 kubelet[1323]: E1217 00:37:17.712255    1323 kubelet.go:2614] "Failed cleaning pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 17 00:37:17 pause-004564 kubelet[1323]: E1217 00:37:17.770561    1323 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="<nil>"
	Dec 17 00:37:17 pause-004564 kubelet[1323]: E1217 00:37:17.770642    1323 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 17 00:37:17 pause-004564 kubelet[1323]: E1217 00:37:17.770665    1323 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 17 00:37:17 pause-004564 kubelet[1323]: W1217 00:37:17.812859    1323 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Dec 17 00:37:17 pause-004564 kubelet[1323]: W1217 00:37:17.987064    1323 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Dec 17 00:37:18 pause-004564 kubelet[1323]: W1217 00:37:18.256881    1323 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Dec 17 00:37:18 pause-004564 kubelet[1323]: W1217 00:37:18.626148    1323 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Dec 17 00:37:18 pause-004564 kubelet[1323]: E1217 00:37:18.771597    1323 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="<nil>"
	Dec 17 00:37:18 pause-004564 kubelet[1323]: E1217 00:37:18.771645    1323 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 17 00:37:18 pause-004564 kubelet[1323]: E1217 00:37:18.771656    1323 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 17 00:37:19 pause-004564 kubelet[1323]: W1217 00:37:19.411246    1323 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Dec 17 00:37:19 pause-004564 kubelet[1323]: E1217 00:37:19.711988    1323 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="state:{}"
	Dec 17 00:37:19 pause-004564 kubelet[1323]: E1217 00:37:19.712135    1323 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 17 00:37:19 pause-004564 kubelet[1323]: E1217 00:37:19.712161    1323 kubelet_pods.go:1266] "Error listing containers" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 17 00:37:19 pause-004564 kubelet[1323]: E1217 00:37:19.712194    1323 kubelet.go:2614] "Failed cleaning pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 17 00:37:19 pause-004564 kubelet[1323]: E1217 00:37:19.772657    1323 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="<nil>"
	Dec 17 00:37:19 pause-004564 kubelet[1323]: E1217 00:37:19.772722    1323 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 17 00:37:19 pause-004564 kubelet[1323]: E1217 00:37:19.772747    1323 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 17 00:37:24 pause-004564 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 17 00:37:24 pause-004564 kubelet[1323]: I1217 00:37:24.528554    1323 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Dec 17 00:37:24 pause-004564 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 17 00:37:24 pause-004564 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:37:24 pause-004564 systemd[1]: kubelet.service: Consumed 1.138s CPU time.
	

                                                
                                                
-- /stdout --
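The repeated kubelet errors at the tail of this log ("dial unix /var/run/crio/crio.sock: connect: no such file or directory") precede the "Started crio.service" entry stamped 00:37:21, which suggests the kubelet was polling the runtime socket while CRI-O was being restarted. A hedged way to confirm the socket and service state on the node, assuming the profile is still up, is:

    minikube -p pause-004564 ssh -- sudo systemctl status crio --no-pager
    minikube -p pause-004564 ssh -- ls -l /var/run/crio/crio.sock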
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-004564 -n pause-004564
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-004564 -n pause-004564: exit status 2 (319.732427ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context pause-004564 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (5.72s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.4s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-742860 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-742860 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (258.61441ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T00:41:16Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-742860 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
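The MK_ADDON_ENABLE_PAUSED failure comes from minikube's paused-state check, which (per the stderr above) shells out to `sudo runc list -f json` and fails because /run/runc does not exist on this CRI-O node. A rough way to reproduce the check by hand, plus the CRI-level listing that works regardless of the runc state directory, would be:

    # the check minikube ran, per the error message above
    minikube -p old-k8s-version-742860 ssh -- sudo runc list -f json
    # the CRI-level view of the same containers via crictl
    minikube -p old-k8s-version-742860 ssh -- sudo crictl ps -a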
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-742860 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-742860 describe deploy/metrics-server -n kube-system: exit status 1 (53.58313ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-742860 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
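Had the metrics-server deployment been created, the registry/image override could have been verified directly; a sketch of that check (assuming the addon's deployment name is metrics-server, as the describe call above expects) is:

    kubectl --context old-k8s-version-742860 -n kube-system \
      get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[*].image}'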
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect old-k8s-version-742860
helpers_test.go:244: (dbg) docker inspect old-k8s-version-742860:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5f3317a25ba0dd672e7c7b2056cadfb4682b7ff2475d42648d9662ef39b8f59b",
	        "Created": "2025-12-17T00:40:24.632786552Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 261489,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T00:40:24.668771479Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2e44aac5cae5bb6b68b129ed5c85e80a5c1aac07706537d46ba12326f0e5c3cf",
	        "ResolvConfPath": "/var/lib/docker/containers/5f3317a25ba0dd672e7c7b2056cadfb4682b7ff2475d42648d9662ef39b8f59b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5f3317a25ba0dd672e7c7b2056cadfb4682b7ff2475d42648d9662ef39b8f59b/hostname",
	        "HostsPath": "/var/lib/docker/containers/5f3317a25ba0dd672e7c7b2056cadfb4682b7ff2475d42648d9662ef39b8f59b/hosts",
	        "LogPath": "/var/lib/docker/containers/5f3317a25ba0dd672e7c7b2056cadfb4682b7ff2475d42648d9662ef39b8f59b/5f3317a25ba0dd672e7c7b2056cadfb4682b7ff2475d42648d9662ef39b8f59b-json.log",
	        "Name": "/old-k8s-version-742860",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-742860:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-742860",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "5f3317a25ba0dd672e7c7b2056cadfb4682b7ff2475d42648d9662ef39b8f59b",
	                "LowerDir": "/var/lib/docker/overlay2/b3872e7dcb375ce53f1001878e7871d4e0b55db5e9e018b728e1b163a393d733-init/diff:/var/lib/docker/overlay2/594b812fd6d8db89dab322ea9e00d43dd555e9709fb5e6953e3873cce717392c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b3872e7dcb375ce53f1001878e7871d4e0b55db5e9e018b728e1b163a393d733/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b3872e7dcb375ce53f1001878e7871d4e0b55db5e9e018b728e1b163a393d733/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b3872e7dcb375ce53f1001878e7871d4e0b55db5e9e018b728e1b163a393d733/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-742860",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-742860/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-742860",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-742860",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-742860",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "3a23337919d379f70835ba10669857e97866ed3bb103cfe629e31a94ff9d2288",
	            "SandboxKey": "/var/run/docker/netns/3a23337919d3",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33058"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33059"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33062"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33060"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33061"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-742860": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "831a77a99d636c5f3163f99f25c807a931c002c29f68db2779eee3263784692b",
	                    "EndpointID": "b754eddd0d8a34f3e696d984f2d81eda5710cdb6301d6e8ae06590fa66898768",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "5a:06:1e:73:58:bf",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-742860",
	                        "5f3317a25ba0"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
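
The docker inspect output above shows the profile container's service ports (22, 2376, 5000, 8443, 32443) published on 127.0.0.1 with ephemeral host ports. As a minimal, illustrative sketch (not part of the test suite), the snippet below shows how such a mapping can be read back with `docker container inspect` and a Go template; the helper name hostPort and the hard-coded container name are assumptions for the example, and it presumes the Docker CLI is on PATH.

// hostport_sketch.go — illustrative only; hostPort is a hypothetical helper.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostPort asks the Docker CLI for the host port that a container port is
// published on, mirroring the Ports block in the inspect output above.
func hostPort(container, containerPort string) (string, error) {
	format := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports %q) 0).HostPort}}`, containerPort)
	out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	// Container name taken from the inspect output above; 22/tcp is the SSH port.
	port, err := hostPort("old-k8s-version-742860", "22/tcp")
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("ssh host port:", port) // e.g. 33058 per the Ports block above
}

Against the mappings shown above this would print 33058, the host port backing the container's SSH endpoint.
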
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-742860 -n old-k8s-version-742860
helpers_test.go:253: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-742860 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-742860 logs -n 25: (1.067965446s)
helpers_test.go:261: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────
────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────
────┤
	│ ssh     │ -p cilium-802249 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-802249             │ jenkins │ v1.37.0 │ 17 Dec 25 00:38 UTC │                     │
	│ ssh     │ -p cilium-802249 sudo containerd config dump                                                                                                                                                                                                  │ cilium-802249             │ jenkins │ v1.37.0 │ 17 Dec 25 00:38 UTC │                     │
	│ ssh     │ -p cilium-802249 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-802249             │ jenkins │ v1.37.0 │ 17 Dec 25 00:38 UTC │                     │
	│ ssh     │ -p cilium-802249 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-802249             │ jenkins │ v1.37.0 │ 17 Dec 25 00:38 UTC │                     │
	│ ssh     │ -p cilium-802249 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-802249             │ jenkins │ v1.37.0 │ 17 Dec 25 00:38 UTC │                     │
	│ ssh     │ -p cilium-802249 sudo crio config                                                                                                                                                                                                             │ cilium-802249             │ jenkins │ v1.37.0 │ 17 Dec 25 00:38 UTC │                     │
	│ delete  │ -p cilium-802249                                                                                                                                                                                                                              │ cilium-802249             │ jenkins │ v1.37.0 │ 17 Dec 25 00:38 UTC │ 17 Dec 25 00:38 UTC │
	│ start   │ -p cert-expiration-753607 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-753607    │ jenkins │ v1.37.0 │ 17 Dec 25 00:38 UTC │ 17 Dec 25 00:38 UTC │
	│ start   │ -p NoKubernetes-375259 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                         │ NoKubernetes-375259       │ jenkins │ v1.37.0 │ 17 Dec 25 00:38 UTC │ 17 Dec 25 00:38 UTC │
	│ delete  │ -p NoKubernetes-375259                                                                                                                                                                                                                        │ NoKubernetes-375259       │ jenkins │ v1.37.0 │ 17 Dec 25 00:38 UTC │ 17 Dec 25 00:38 UTC │
	│ start   │ -p NoKubernetes-375259 --no-kubernetes --cpus=1 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                │ NoKubernetes-375259       │ jenkins │ v1.37.0 │ 17 Dec 25 00:38 UTC │ 17 Dec 25 00:39 UTC │
	│ ssh     │ -p NoKubernetes-375259 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                       │ NoKubernetes-375259       │ jenkins │ v1.37.0 │ 17 Dec 25 00:39 UTC │                     │
	│ stop    │ -p NoKubernetes-375259                                                                                                                                                                                                                        │ NoKubernetes-375259       │ jenkins │ v1.37.0 │ 17 Dec 25 00:39 UTC │ 17 Dec 25 00:39 UTC │
	│ start   │ -p NoKubernetes-375259 --driver=docker  --container-runtime=crio                                                                                                                                                                              │ NoKubernetes-375259       │ jenkins │ v1.37.0 │ 17 Dec 25 00:39 UTC │ 17 Dec 25 00:39 UTC │
	│ ssh     │ -p NoKubernetes-375259 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                       │ NoKubernetes-375259       │ jenkins │ v1.37.0 │ 17 Dec 25 00:39 UTC │                     │
	│ delete  │ -p NoKubernetes-375259                                                                                                                                                                                                                        │ NoKubernetes-375259       │ jenkins │ v1.37.0 │ 17 Dec 25 00:39 UTC │ 17 Dec 25 00:39 UTC │
	│ start   │ -p force-systemd-flag-452634 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                   │ force-systemd-flag-452634 │ jenkins │ v1.37.0 │ 17 Dec 25 00:39 UTC │ 17 Dec 25 00:39 UTC │
	│ ssh     │ force-systemd-flag-452634 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-452634 │ jenkins │ v1.37.0 │ 17 Dec 25 00:39 UTC │ 17 Dec 25 00:39 UTC │
	│ delete  │ -p force-systemd-flag-452634                                                                                                                                                                                                                  │ force-systemd-flag-452634 │ jenkins │ v1.37.0 │ 17 Dec 25 00:39 UTC │ 17 Dec 25 00:39 UTC │
	│ start   │ -p cert-options-636512 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-636512       │ jenkins │ v1.37.0 │ 17 Dec 25 00:39 UTC │ 17 Dec 25 00:40 UTC │
	│ ssh     │ cert-options-636512 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-636512       │ jenkins │ v1.37.0 │ 17 Dec 25 00:40 UTC │ 17 Dec 25 00:40 UTC │
	│ ssh     │ -p cert-options-636512 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-636512       │ jenkins │ v1.37.0 │ 17 Dec 25 00:40 UTC │ 17 Dec 25 00:40 UTC │
	│ delete  │ -p cert-options-636512                                                                                                                                                                                                                        │ cert-options-636512       │ jenkins │ v1.37.0 │ 17 Dec 25 00:40 UTC │ 17 Dec 25 00:40 UTC │
	│ start   │ -p old-k8s-version-742860 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-742860    │ jenkins │ v1.37.0 │ 17 Dec 25 00:40 UTC │ 17 Dec 25 00:41 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-742860 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-742860    │ jenkins │ v1.37.0 │ 17 Dec 25 00:41 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 00:40:18
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 00:40:18.929676  260378 out.go:360] Setting OutFile to fd 1 ...
	I1217 00:40:18.929916  260378 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:40:18.929924  260378 out.go:374] Setting ErrFile to fd 2...
	I1217 00:40:18.929928  260378 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:40:18.930126  260378 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-12816/.minikube/bin
	I1217 00:40:18.930561  260378 out.go:368] Setting JSON to false
	I1217 00:40:18.931576  260378 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4969,"bootTime":1765927050,"procs":297,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 00:40:18.931626  260378 start.go:143] virtualization: kvm guest
	I1217 00:40:18.933452  260378 out.go:179] * [old-k8s-version-742860] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 00:40:18.935039  260378 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 00:40:18.935045  260378 notify.go:221] Checking for updates...
	I1217 00:40:18.937129  260378 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 00:40:18.938351  260378 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22168-12816/kubeconfig
	I1217 00:40:18.939389  260378 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-12816/.minikube
	I1217 00:40:18.940413  260378 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 00:40:18.944467  260378 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 00:40:18.946007  260378 config.go:182] Loaded profile config "cert-expiration-753607": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:40:18.946113  260378 config.go:182] Loaded profile config "kubernetes-upgrade-803959": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1217 00:40:18.946206  260378 config.go:182] Loaded profile config "stopped-upgrade-028618": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1217 00:40:18.946283  260378 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 00:40:18.970173  260378 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1217 00:40:18.970256  260378 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:40:19.021917  260378 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:74 SystemTime:2025-12-17 00:40:19.012098513 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 00:40:19.022064  260378 docker.go:319] overlay module found
	I1217 00:40:19.023872  260378 out.go:179] * Using the docker driver based on user configuration
	I1217 00:40:19.024997  260378 start.go:309] selected driver: docker
	I1217 00:40:19.025011  260378 start.go:927] validating driver "docker" against <nil>
	I1217 00:40:19.025026  260378 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 00:40:19.025691  260378 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:40:19.084013  260378 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:74 SystemTime:2025-12-17 00:40:19.073884735 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 00:40:19.084183  260378 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1217 00:40:19.084368  260378 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 00:40:19.085845  260378 out.go:179] * Using Docker driver with root privileges
	I1217 00:40:19.086976  260378 cni.go:84] Creating CNI manager for ""
	I1217 00:40:19.087060  260378 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 00:40:19.087075  260378 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1217 00:40:19.087135  260378 start.go:353] cluster config:
	{Name:old-k8s-version-742860 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-742860 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:40:19.088375  260378 out.go:179] * Starting "old-k8s-version-742860" primary control-plane node in "old-k8s-version-742860" cluster
	I1217 00:40:19.089537  260378 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 00:40:19.090618  260378 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 00:40:19.091641  260378 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1217 00:40:19.091678  260378 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22168-12816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1217 00:40:19.091690  260378 cache.go:65] Caching tarball of preloaded images
	I1217 00:40:19.091756  260378 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 00:40:19.091800  260378 preload.go:238] Found /home/jenkins/minikube-integration/22168-12816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1217 00:40:19.091813  260378 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1217 00:40:19.091913  260378 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/old-k8s-version-742860/config.json ...
	I1217 00:40:19.091932  260378 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/old-k8s-version-742860/config.json: {Name:mk3af66dc0d3e9902632f9a0b3d28affb7556237 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:40:19.113107  260378 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 00:40:19.113129  260378 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 00:40:19.113145  260378 cache.go:243] Successfully downloaded all kic artifacts
	I1217 00:40:19.113175  260378 start.go:360] acquireMachinesLock for old-k8s-version-742860: {Name:mk23fb8e24185f6cdecffcd5d99d17b63fa59954 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 00:40:19.113282  260378 start.go:364] duration metric: took 88.812µs to acquireMachinesLock for "old-k8s-version-742860"
	I1217 00:40:19.113311  260378 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-742860 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-742860 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 00:40:19.113397  260378 start.go:125] createHost starting for "" (driver="docker")
	I1217 00:40:17.723116  211439 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.059805286s)
	W1217 00:40:17.723152  211439 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I1217 00:40:17.723163  211439 logs.go:123] Gathering logs for kube-apiserver [81770d70df736c55a0ce9c6a787d9a2c0ad3240f757d951f79d8bd010c312232] ...
	I1217 00:40:17.723181  211439 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81770d70df736c55a0ce9c6a787d9a2c0ad3240f757d951f79d8bd010c312232"
	I1217 00:40:17.760210  211439 logs.go:123] Gathering logs for kube-apiserver [2643ce80e6a45d8725eec137c49854f3d253b366c2696ae74ede6fcd6a30cde5] ...
	I1217 00:40:17.760237  211439 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2643ce80e6a45d8725eec137c49854f3d253b366c2696ae74ede6fcd6a30cde5"
	I1217 00:40:17.798236  211439 logs.go:123] Gathering logs for kube-scheduler [4e190c0a9f421d12adcc9208d08ab91cb11f9e4325f737619a4434567dbccaeb] ...
	I1217 00:40:17.798275  211439 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4e190c0a9f421d12adcc9208d08ab91cb11f9e4325f737619a4434567dbccaeb"
	I1217 00:40:17.868012  211439 logs.go:123] Gathering logs for kube-controller-manager [1365686a06e447d9ba4b06099dc6a34f3e72603f4dc72891bcfd26a9a7d3147b] ...
	I1217 00:40:17.868041  211439 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1365686a06e447d9ba4b06099dc6a34f3e72603f4dc72891bcfd26a9a7d3147b"
	I1217 00:40:17.904498  211439 logs.go:123] Gathering logs for container status ...
	I1217 00:40:17.904524  211439 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:40:17.945420  211439 logs.go:123] Gathering logs for kube-controller-manager [a17d3e4d9deea2292fd636997da3ccec3cbb8173240b195f2d7308083588d524] ...
	I1217 00:40:17.945445  211439 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a17d3e4d9deea2292fd636997da3ccec3cbb8173240b195f2d7308083588d524"
	I1217 00:40:17.982175  211439 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:40:17.982202  211439 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:40:18.041067  224114 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1217 00:40:18.041534  224114 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 00:40:18.041592  224114 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:40:18.041641  224114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:40:18.068712  224114 cri.go:89] found id: "bbbd1a227c5cedb8d48bd4671e505f8c3b234a95c40a60f240637f17b7839b10"
	I1217 00:40:18.068736  224114 cri.go:89] found id: ""
	I1217 00:40:18.068746  224114 logs.go:282] 1 containers: [bbbd1a227c5cedb8d48bd4671e505f8c3b234a95c40a60f240637f17b7839b10]
	I1217 00:40:18.068797  224114 ssh_runner.go:195] Run: which crictl
	I1217 00:40:18.072781  224114 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:40:18.072832  224114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:40:18.099676  224114 cri.go:89] found id: ""
	I1217 00:40:18.099696  224114 logs.go:282] 0 containers: []
	W1217 00:40:18.099704  224114 logs.go:284] No container was found matching "etcd"
	I1217 00:40:18.099709  224114 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:40:18.099760  224114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:40:18.125958  224114 cri.go:89] found id: ""
	I1217 00:40:18.125983  224114 logs.go:282] 0 containers: []
	W1217 00:40:18.126020  224114 logs.go:284] No container was found matching "coredns"
	I1217 00:40:18.126029  224114 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:40:18.126077  224114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:40:18.151547  224114 cri.go:89] found id: "935bec7f90de32d67ca8b89cce2e3566052a6687fa0bbbcda96111a31ae6fc6a"
	I1217 00:40:18.151573  224114 cri.go:89] found id: ""
	I1217 00:40:18.151580  224114 logs.go:282] 1 containers: [935bec7f90de32d67ca8b89cce2e3566052a6687fa0bbbcda96111a31ae6fc6a]
	I1217 00:40:18.151629  224114 ssh_runner.go:195] Run: which crictl
	I1217 00:40:18.155464  224114 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:40:18.155522  224114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:40:18.181088  224114 cri.go:89] found id: ""
	I1217 00:40:18.181111  224114 logs.go:282] 0 containers: []
	W1217 00:40:18.181120  224114 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:40:18.181130  224114 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:40:18.181176  224114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:40:18.209029  224114 cri.go:89] found id: "dc75709a155a2a5401b46d5ddac2a265bc12181fadc33c0e4f267d116f4d6829"
	I1217 00:40:18.209049  224114 cri.go:89] found id: "9f29095553aaa04499edb2de3b164c530f807c89ee4e024ffe22939cdfaecbda"
	I1217 00:40:18.209053  224114 cri.go:89] found id: ""
	I1217 00:40:18.209061  224114 logs.go:282] 2 containers: [dc75709a155a2a5401b46d5ddac2a265bc12181fadc33c0e4f267d116f4d6829 9f29095553aaa04499edb2de3b164c530f807c89ee4e024ffe22939cdfaecbda]
	I1217 00:40:18.209128  224114 ssh_runner.go:195] Run: which crictl
	I1217 00:40:18.213036  224114 ssh_runner.go:195] Run: which crictl
	I1217 00:40:18.216575  224114 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:40:18.216629  224114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:40:18.241692  224114 cri.go:89] found id: ""
	I1217 00:40:18.241711  224114 logs.go:282] 0 containers: []
	W1217 00:40:18.241717  224114 logs.go:284] No container was found matching "kindnet"
	I1217 00:40:18.241725  224114 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 00:40:18.241767  224114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 00:40:18.269568  224114 cri.go:89] found id: ""
	I1217 00:40:18.269592  224114 logs.go:282] 0 containers: []
	W1217 00:40:18.269603  224114 logs.go:284] No container was found matching "storage-provisioner"
	I1217 00:40:18.269619  224114 logs.go:123] Gathering logs for kubelet ...
	I1217 00:40:18.269638  224114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:40:18.370416  224114 logs.go:123] Gathering logs for dmesg ...
	I1217 00:40:18.370445  224114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:40:18.385965  224114 logs.go:123] Gathering logs for kube-apiserver [bbbd1a227c5cedb8d48bd4671e505f8c3b234a95c40a60f240637f17b7839b10] ...
	I1217 00:40:18.386005  224114 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bbbd1a227c5cedb8d48bd4671e505f8c3b234a95c40a60f240637f17b7839b10"
	I1217 00:40:18.417515  224114 logs.go:123] Gathering logs for kube-scheduler [935bec7f90de32d67ca8b89cce2e3566052a6687fa0bbbcda96111a31ae6fc6a] ...
	I1217 00:40:18.417541  224114 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 935bec7f90de32d67ca8b89cce2e3566052a6687fa0bbbcda96111a31ae6fc6a"
	I1217 00:40:18.445134  224114 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:40:18.445162  224114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:40:18.506660  224114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:40:18.506681  224114 logs.go:123] Gathering logs for kube-controller-manager [dc75709a155a2a5401b46d5ddac2a265bc12181fadc33c0e4f267d116f4d6829] ...
	I1217 00:40:18.506695  224114 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dc75709a155a2a5401b46d5ddac2a265bc12181fadc33c0e4f267d116f4d6829"
	I1217 00:40:18.533016  224114 logs.go:123] Gathering logs for kube-controller-manager [9f29095553aaa04499edb2de3b164c530f807c89ee4e024ffe22939cdfaecbda] ...
	I1217 00:40:18.533044  224114 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9f29095553aaa04499edb2de3b164c530f807c89ee4e024ffe22939cdfaecbda"
	I1217 00:40:18.560167  224114 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:40:18.560195  224114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:40:18.620232  224114 logs.go:123] Gathering logs for container status ...
	I1217 00:40:18.620261  224114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:40:21.153059  224114 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1217 00:40:21.153585  224114 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 00:40:21.153654  224114 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:40:21.153720  224114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:40:21.182152  224114 cri.go:89] found id: "bbbd1a227c5cedb8d48bd4671e505f8c3b234a95c40a60f240637f17b7839b10"
	I1217 00:40:21.182174  224114 cri.go:89] found id: ""
	I1217 00:40:21.182182  224114 logs.go:282] 1 containers: [bbbd1a227c5cedb8d48bd4671e505f8c3b234a95c40a60f240637f17b7839b10]
	I1217 00:40:21.182240  224114 ssh_runner.go:195] Run: which crictl
	I1217 00:40:21.186573  224114 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:40:21.186635  224114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:40:21.214575  224114 cri.go:89] found id: ""
	I1217 00:40:21.214600  224114 logs.go:282] 0 containers: []
	W1217 00:40:21.214611  224114 logs.go:284] No container was found matching "etcd"
	I1217 00:40:21.214618  224114 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:40:21.214672  224114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:40:21.244086  224114 cri.go:89] found id: ""
	I1217 00:40:21.244114  224114 logs.go:282] 0 containers: []
	W1217 00:40:21.244126  224114 logs.go:284] No container was found matching "coredns"
	I1217 00:40:21.244133  224114 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:40:21.244192  224114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:40:21.275641  224114 cri.go:89] found id: "935bec7f90de32d67ca8b89cce2e3566052a6687fa0bbbcda96111a31ae6fc6a"
	I1217 00:40:21.275667  224114 cri.go:89] found id: ""
	I1217 00:40:21.275677  224114 logs.go:282] 1 containers: [935bec7f90de32d67ca8b89cce2e3566052a6687fa0bbbcda96111a31ae6fc6a]
	I1217 00:40:21.275744  224114 ssh_runner.go:195] Run: which crictl
	I1217 00:40:21.279703  224114 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:40:21.279767  224114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:40:21.307879  224114 cri.go:89] found id: ""
	I1217 00:40:21.307901  224114 logs.go:282] 0 containers: []
	W1217 00:40:21.307909  224114 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:40:21.307915  224114 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:40:21.307974  224114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:40:21.335291  224114 cri.go:89] found id: "dc75709a155a2a5401b46d5ddac2a265bc12181fadc33c0e4f267d116f4d6829"
	I1217 00:40:21.335315  224114 cri.go:89] found id: "9f29095553aaa04499edb2de3b164c530f807c89ee4e024ffe22939cdfaecbda"
	I1217 00:40:21.335322  224114 cri.go:89] found id: ""
	I1217 00:40:21.335331  224114 logs.go:282] 2 containers: [dc75709a155a2a5401b46d5ddac2a265bc12181fadc33c0e4f267d116f4d6829 9f29095553aaa04499edb2de3b164c530f807c89ee4e024ffe22939cdfaecbda]
	I1217 00:40:21.335390  224114 ssh_runner.go:195] Run: which crictl
	I1217 00:40:21.339377  224114 ssh_runner.go:195] Run: which crictl
	I1217 00:40:21.342978  224114 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:40:21.343046  224114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:40:21.370287  224114 cri.go:89] found id: ""
	I1217 00:40:21.370313  224114 logs.go:282] 0 containers: []
	W1217 00:40:21.370322  224114 logs.go:284] No container was found matching "kindnet"
	I1217 00:40:21.370328  224114 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 00:40:21.370393  224114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 00:40:21.399945  224114 cri.go:89] found id: ""
	I1217 00:40:21.399973  224114 logs.go:282] 0 containers: []
	W1217 00:40:21.399984  224114 logs.go:284] No container was found matching "storage-provisioner"
	I1217 00:40:21.400015  224114 logs.go:123] Gathering logs for kubelet ...
	I1217 00:40:21.400029  224114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:40:21.493292  224114 logs.go:123] Gathering logs for dmesg ...
	I1217 00:40:21.493323  224114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:40:21.508675  224114 logs.go:123] Gathering logs for kube-apiserver [bbbd1a227c5cedb8d48bd4671e505f8c3b234a95c40a60f240637f17b7839b10] ...
	I1217 00:40:21.508698  224114 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bbbd1a227c5cedb8d48bd4671e505f8c3b234a95c40a60f240637f17b7839b10"
	I1217 00:40:21.537719  224114 logs.go:123] Gathering logs for kube-controller-manager [dc75709a155a2a5401b46d5ddac2a265bc12181fadc33c0e4f267d116f4d6829] ...
	I1217 00:40:21.537748  224114 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dc75709a155a2a5401b46d5ddac2a265bc12181fadc33c0e4f267d116f4d6829"
	I1217 00:40:21.567546  224114 logs.go:123] Gathering logs for kube-controller-manager [9f29095553aaa04499edb2de3b164c530f807c89ee4e024ffe22939cdfaecbda] ...
	I1217 00:40:21.567572  224114 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9f29095553aaa04499edb2de3b164c530f807c89ee4e024ffe22939cdfaecbda"
	I1217 00:40:21.597207  224114 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:40:21.597232  224114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:40:21.658513  224114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:40:21.658536  224114 logs.go:123] Gathering logs for kube-scheduler [935bec7f90de32d67ca8b89cce2e3566052a6687fa0bbbcda96111a31ae6fc6a] ...
	I1217 00:40:21.658551  224114 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 935bec7f90de32d67ca8b89cce2e3566052a6687fa0bbbcda96111a31ae6fc6a"
	I1217 00:40:21.687211  224114 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:40:21.687237  224114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:40:21.739357  224114 logs.go:123] Gathering logs for container status ...
	I1217 00:40:21.739386  224114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:40:19.115148  260378 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1217 00:40:19.115359  260378 start.go:159] libmachine.API.Create for "old-k8s-version-742860" (driver="docker")
	I1217 00:40:19.115392  260378 client.go:173] LocalClient.Create starting
	I1217 00:40:19.115461  260378 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem
	I1217 00:40:19.115490  260378 main.go:143] libmachine: Decoding PEM data...
	I1217 00:40:19.115508  260378 main.go:143] libmachine: Parsing certificate...
	I1217 00:40:19.115561  260378 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22168-12816/.minikube/certs/cert.pem
	I1217 00:40:19.115582  260378 main.go:143] libmachine: Decoding PEM data...
	I1217 00:40:19.115594  260378 main.go:143] libmachine: Parsing certificate...
	I1217 00:40:19.115914  260378 cli_runner.go:164] Run: docker network inspect old-k8s-version-742860 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1217 00:40:19.132943  260378 cli_runner.go:211] docker network inspect old-k8s-version-742860 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1217 00:40:19.133074  260378 network_create.go:284] running [docker network inspect old-k8s-version-742860] to gather additional debugging logs...
	I1217 00:40:19.133098  260378 cli_runner.go:164] Run: docker network inspect old-k8s-version-742860
	W1217 00:40:19.149391  260378 cli_runner.go:211] docker network inspect old-k8s-version-742860 returned with exit code 1
	I1217 00:40:19.149417  260378 network_create.go:287] error running [docker network inspect old-k8s-version-742860]: docker network inspect old-k8s-version-742860: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network old-k8s-version-742860 not found
	I1217 00:40:19.149443  260378 network_create.go:289] output of [docker network inspect old-k8s-version-742860]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network old-k8s-version-742860 not found
	
	** /stderr **
	I1217 00:40:19.149573  260378 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 00:40:19.169033  260378 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-6ffd1d738f01 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:6e:3d:52:75:47:82} reservation:<nil>}
	I1217 00:40:19.169725  260378 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-280edd437675 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:5e:ae:02:b5:f9:a6} reservation:<nil>}
	I1217 00:40:19.170410  260378 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-9f28d049043c IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:fa:3f:8e:e9:44:56} reservation:<nil>}
	I1217 00:40:19.171205  260378 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-314a6511b83e IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:a2:a0:70:2f:76:16} reservation:<nil>}
	I1217 00:40:19.171735  260378 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-677a76932866 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:fe:67:4f:73:54:d4} reservation:<nil>}
	I1217 00:40:19.172612  260378 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f4d360}
	I1217 00:40:19.172642  260378 network_create.go:124] attempt to create docker network old-k8s-version-742860 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1217 00:40:19.172693  260378 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-742860 old-k8s-version-742860
	I1217 00:40:19.218303  260378 network_create.go:108] docker network old-k8s-version-742860 192.168.94.0/24 created
	I1217 00:40:19.218333  260378 kic.go:121] calculated static IP "192.168.94.2" for the "old-k8s-version-742860" container
	I1217 00:40:19.218395  260378 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1217 00:40:19.235549  260378 cli_runner.go:164] Run: docker volume create old-k8s-version-742860 --label name.minikube.sigs.k8s.io=old-k8s-version-742860 --label created_by.minikube.sigs.k8s.io=true
	I1217 00:40:19.253046  260378 oci.go:103] Successfully created a docker volume old-k8s-version-742860
	I1217 00:40:19.253140  260378 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-742860-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-742860 --entrypoint /usr/bin/test -v old-k8s-version-742860:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib
	I1217 00:40:19.627525  260378 oci.go:107] Successfully prepared a docker volume old-k8s-version-742860
	I1217 00:40:19.627598  260378 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1217 00:40:19.627621  260378 kic.go:194] Starting extracting preloaded images to volume ...
	I1217 00:40:19.627674  260378 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22168-12816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-742860:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir
	I1217 00:40:20.538110  211439 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1217 00:40:22.025211  211439 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": read tcp 192.168.103.1:36554->192.168.103.2:8443: read: connection reset by peer
	I1217 00:40:22.025305  211439 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:40:22.025382  211439 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:40:22.059920  211439 cri.go:89] found id: "81770d70df736c55a0ce9c6a787d9a2c0ad3240f757d951f79d8bd010c312232"
	I1217 00:40:22.059938  211439 cri.go:89] found id: "2643ce80e6a45d8725eec137c49854f3d253b366c2696ae74ede6fcd6a30cde5"
	I1217 00:40:22.059942  211439 cri.go:89] found id: ""
	I1217 00:40:22.059949  211439 logs.go:282] 2 containers: [81770d70df736c55a0ce9c6a787d9a2c0ad3240f757d951f79d8bd010c312232 2643ce80e6a45d8725eec137c49854f3d253b366c2696ae74ede6fcd6a30cde5]
	I1217 00:40:22.060011  211439 ssh_runner.go:195] Run: which crictl
	I1217 00:40:22.063917  211439 ssh_runner.go:195] Run: which crictl
	I1217 00:40:22.067509  211439 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:40:22.067564  211439 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:40:22.102755  211439 cri.go:89] found id: ""
	I1217 00:40:22.102785  211439 logs.go:282] 0 containers: []
	W1217 00:40:22.102796  211439 logs.go:284] No container was found matching "etcd"
	I1217 00:40:22.102803  211439 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:40:22.102858  211439 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:40:22.134926  211439 cri.go:89] found id: ""
	I1217 00:40:22.134961  211439 logs.go:282] 0 containers: []
	W1217 00:40:22.134973  211439 logs.go:284] No container was found matching "coredns"
	I1217 00:40:22.134987  211439 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:40:22.135082  211439 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:40:22.168390  211439 cri.go:89] found id: "4e190c0a9f421d12adcc9208d08ab91cb11f9e4325f737619a4434567dbccaeb"
	I1217 00:40:22.168413  211439 cri.go:89] found id: ""
	I1217 00:40:22.168420  211439 logs.go:282] 1 containers: [4e190c0a9f421d12adcc9208d08ab91cb11f9e4325f737619a4434567dbccaeb]
	I1217 00:40:22.168469  211439 ssh_runner.go:195] Run: which crictl
	I1217 00:40:22.172091  211439 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:40:22.172144  211439 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:40:22.204862  211439 cri.go:89] found id: ""
	I1217 00:40:22.204888  211439 logs.go:282] 0 containers: []
	W1217 00:40:22.204898  211439 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:40:22.204904  211439 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:40:22.204951  211439 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:40:22.239557  211439 cri.go:89] found id: "a17d3e4d9deea2292fd636997da3ccec3cbb8173240b195f2d7308083588d524"
	I1217 00:40:22.239577  211439 cri.go:89] found id: "1365686a06e447d9ba4b06099dc6a34f3e72603f4dc72891bcfd26a9a7d3147b"
	I1217 00:40:22.239581  211439 cri.go:89] found id: ""
	I1217 00:40:22.239588  211439 logs.go:282] 2 containers: [a17d3e4d9deea2292fd636997da3ccec3cbb8173240b195f2d7308083588d524 1365686a06e447d9ba4b06099dc6a34f3e72603f4dc72891bcfd26a9a7d3147b]
	I1217 00:40:22.239640  211439 ssh_runner.go:195] Run: which crictl
	I1217 00:40:22.243328  211439 ssh_runner.go:195] Run: which crictl
	I1217 00:40:22.246687  211439 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:40:22.246739  211439 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:40:22.279244  211439 cri.go:89] found id: ""
	I1217 00:40:22.279266  211439 logs.go:282] 0 containers: []
	W1217 00:40:22.279275  211439 logs.go:284] No container was found matching "kindnet"
	I1217 00:40:22.279280  211439 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 00:40:22.279322  211439 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 00:40:22.313466  211439 cri.go:89] found id: ""
	I1217 00:40:22.313494  211439 logs.go:282] 0 containers: []
	W1217 00:40:22.313506  211439 logs.go:284] No container was found matching "storage-provisioner"
	I1217 00:40:22.313529  211439 logs.go:123] Gathering logs for kubelet ...
	I1217 00:40:22.313549  211439 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:40:22.402947  211439 logs.go:123] Gathering logs for dmesg ...
	I1217 00:40:22.402979  211439 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:40:22.417945  211439 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:40:22.417966  211439 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:40:22.475081  211439 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:40:22.475104  211439 logs.go:123] Gathering logs for kube-apiserver [81770d70df736c55a0ce9c6a787d9a2c0ad3240f757d951f79d8bd010c312232] ...
	I1217 00:40:22.475115  211439 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81770d70df736c55a0ce9c6a787d9a2c0ad3240f757d951f79d8bd010c312232"
	I1217 00:40:22.512547  211439 logs.go:123] Gathering logs for kube-scheduler [4e190c0a9f421d12adcc9208d08ab91cb11f9e4325f737619a4434567dbccaeb] ...
	I1217 00:40:22.512574  211439 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4e190c0a9f421d12adcc9208d08ab91cb11f9e4325f737619a4434567dbccaeb"
	I1217 00:40:22.584871  211439 logs.go:123] Gathering logs for kube-controller-manager [a17d3e4d9deea2292fd636997da3ccec3cbb8173240b195f2d7308083588d524] ...
	I1217 00:40:22.584898  211439 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a17d3e4d9deea2292fd636997da3ccec3cbb8173240b195f2d7308083588d524"
	I1217 00:40:22.617941  211439 logs.go:123] Gathering logs for container status ...
	I1217 00:40:22.617962  211439 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:40:22.654516  211439 logs.go:123] Gathering logs for kube-apiserver [2643ce80e6a45d8725eec137c49854f3d253b366c2696ae74ede6fcd6a30cde5] ...
	I1217 00:40:22.654544  211439 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2643ce80e6a45d8725eec137c49854f3d253b366c2696ae74ede6fcd6a30cde5"
	I1217 00:40:22.690402  211439 logs.go:123] Gathering logs for kube-controller-manager [1365686a06e447d9ba4b06099dc6a34f3e72603f4dc72891bcfd26a9a7d3147b] ...
	I1217 00:40:22.690434  211439 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1365686a06e447d9ba4b06099dc6a34f3e72603f4dc72891bcfd26a9a7d3147b"
	I1217 00:40:22.724140  211439 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:40:22.724166  211439 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
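	The block above records one full pass of a poll-and-diagnose loop: probe the apiserver's /healthz endpoint, and when the connection is refused, enumerate CRI containers and collect their logs. A minimal Go sketch of that pattern follows; the helper names are illustrative and the endpoint is copied from the log lines above, so this is an assumption-laden sketch, not minikube's actual implementation.

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"os/exec"
		"time"
	)

	// apiserverHealthy probes /healthz the way the logged api_server checks do.
	func apiserverHealthy(url string) bool {
		client := &http.Client{
			Timeout:   2 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get(url)
		if err != nil {
			return false // e.g. "connect: connection refused" while the apiserver is down
		}
		defer resp.Body.Close()
		return resp.StatusCode == http.StatusOK
	}

	// gatherDiagnostics mirrors the logged fallback: list containers per component name.
	func gatherDiagnostics() {
		for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
			out, _ := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
			fmt.Printf("%s containers: %q\n", name, out)
		}
	}

	func main() {
		url := "https://192.168.103.2:8443/healthz" // address taken from the log above
		for i := 0; i < 5; i++ {
			if apiserverHealthy(url) {
				fmt.Println("apiserver is healthy")
				return
			}
			gatherDiagnostics()
			time.Sleep(3 * time.Second)
		}
		fmt.Println("apiserver never became healthy")
	}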
	I1217 00:40:24.274887  224114 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1217 00:40:24.275337  224114 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 00:40:24.275385  224114 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:40:24.275429  224114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:40:24.301967  224114 cri.go:89] found id: "bbbd1a227c5cedb8d48bd4671e505f8c3b234a95c40a60f240637f17b7839b10"
	I1217 00:40:24.302004  224114 cri.go:89] found id: ""
	I1217 00:40:24.302015  224114 logs.go:282] 1 containers: [bbbd1a227c5cedb8d48bd4671e505f8c3b234a95c40a60f240637f17b7839b10]
	I1217 00:40:24.302076  224114 ssh_runner.go:195] Run: which crictl
	I1217 00:40:24.305755  224114 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:40:24.305817  224114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:40:24.331042  224114 cri.go:89] found id: ""
	I1217 00:40:24.331063  224114 logs.go:282] 0 containers: []
	W1217 00:40:24.331071  224114 logs.go:284] No container was found matching "etcd"
	I1217 00:40:24.331077  224114 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:40:24.331128  224114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:40:24.354930  224114 cri.go:89] found id: ""
	I1217 00:40:24.354963  224114 logs.go:282] 0 containers: []
	W1217 00:40:24.354976  224114 logs.go:284] No container was found matching "coredns"
	I1217 00:40:24.354984  224114 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:40:24.355062  224114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:40:24.379842  224114 cri.go:89] found id: "935bec7f90de32d67ca8b89cce2e3566052a6687fa0bbbcda96111a31ae6fc6a"
	I1217 00:40:24.379864  224114 cri.go:89] found id: ""
	I1217 00:40:24.379872  224114 logs.go:282] 1 containers: [935bec7f90de32d67ca8b89cce2e3566052a6687fa0bbbcda96111a31ae6fc6a]
	I1217 00:40:24.379927  224114 ssh_runner.go:195] Run: which crictl
	I1217 00:40:24.383684  224114 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:40:24.383742  224114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:40:24.409029  224114 cri.go:89] found id: ""
	I1217 00:40:24.409054  224114 logs.go:282] 0 containers: []
	W1217 00:40:24.409062  224114 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:40:24.409067  224114 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:40:24.409135  224114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:40:24.435201  224114 cri.go:89] found id: "dc75709a155a2a5401b46d5ddac2a265bc12181fadc33c0e4f267d116f4d6829"
	I1217 00:40:24.435222  224114 cri.go:89] found id: "9f29095553aaa04499edb2de3b164c530f807c89ee4e024ffe22939cdfaecbda"
	I1217 00:40:24.435226  224114 cri.go:89] found id: ""
	I1217 00:40:24.435233  224114 logs.go:282] 2 containers: [dc75709a155a2a5401b46d5ddac2a265bc12181fadc33c0e4f267d116f4d6829 9f29095553aaa04499edb2de3b164c530f807c89ee4e024ffe22939cdfaecbda]
	I1217 00:40:24.435279  224114 ssh_runner.go:195] Run: which crictl
	I1217 00:40:24.439175  224114 ssh_runner.go:195] Run: which crictl
	I1217 00:40:24.442811  224114 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:40:24.442867  224114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:40:24.471413  224114 cri.go:89] found id: ""
	I1217 00:40:24.471440  224114 logs.go:282] 0 containers: []
	W1217 00:40:24.471450  224114 logs.go:284] No container was found matching "kindnet"
	I1217 00:40:24.471457  224114 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 00:40:24.471499  224114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 00:40:24.498645  224114 cri.go:89] found id: ""
	I1217 00:40:24.498664  224114 logs.go:282] 0 containers: []
	W1217 00:40:24.498672  224114 logs.go:284] No container was found matching "storage-provisioner"
	I1217 00:40:24.498685  224114 logs.go:123] Gathering logs for dmesg ...
	I1217 00:40:24.498696  224114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:40:24.512055  224114 logs.go:123] Gathering logs for kube-controller-manager [dc75709a155a2a5401b46d5ddac2a265bc12181fadc33c0e4f267d116f4d6829] ...
	I1217 00:40:24.512077  224114 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dc75709a155a2a5401b46d5ddac2a265bc12181fadc33c0e4f267d116f4d6829"
	I1217 00:40:24.537771  224114 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:40:24.537798  224114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:40:24.600359  224114 logs.go:123] Gathering logs for kubelet ...
	I1217 00:40:24.600389  224114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:40:24.684393  224114 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:40:24.684429  224114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:40:24.743166  224114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:40:24.743201  224114 logs.go:123] Gathering logs for kube-apiserver [bbbd1a227c5cedb8d48bd4671e505f8c3b234a95c40a60f240637f17b7839b10] ...
	I1217 00:40:24.743229  224114 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bbbd1a227c5cedb8d48bd4671e505f8c3b234a95c40a60f240637f17b7839b10"
	I1217 00:40:24.778411  224114 logs.go:123] Gathering logs for kube-scheduler [935bec7f90de32d67ca8b89cce2e3566052a6687fa0bbbcda96111a31ae6fc6a] ...
	I1217 00:40:24.778438  224114 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 935bec7f90de32d67ca8b89cce2e3566052a6687fa0bbbcda96111a31ae6fc6a"
	I1217 00:40:24.809606  224114 logs.go:123] Gathering logs for kube-controller-manager [9f29095553aaa04499edb2de3b164c530f807c89ee4e024ffe22939cdfaecbda] ...
	I1217 00:40:24.809640  224114 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9f29095553aaa04499edb2de3b164c530f807c89ee4e024ffe22939cdfaecbda"
	I1217 00:40:24.837844  224114 logs.go:123] Gathering logs for container status ...
	I1217 00:40:24.837877  224114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:40:24.558322  260378 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22168-12816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-742860:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir: (4.930598997s)
	I1217 00:40:24.558351  260378 kic.go:203] duration metric: took 4.930726021s to extract preloaded images to volume ...
	W1217 00:40:24.558450  260378 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1217 00:40:24.558490  260378 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1217 00:40:24.558540  260378 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1217 00:40:24.613865  260378 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-742860 --name old-k8s-version-742860 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-742860 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-742860 --network old-k8s-version-742860 --ip 192.168.94.2 --volume old-k8s-version-742860:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78
	I1217 00:40:24.893499  260378 cli_runner.go:164] Run: docker container inspect old-k8s-version-742860 --format={{.State.Running}}
	I1217 00:40:24.910626  260378 cli_runner.go:164] Run: docker container inspect old-k8s-version-742860 --format={{.State.Status}}
	I1217 00:40:24.929823  260378 cli_runner.go:164] Run: docker exec old-k8s-version-742860 stat /var/lib/dpkg/alternatives/iptables
	I1217 00:40:24.977103  260378 oci.go:144] the created container "old-k8s-version-742860" has a running status.
	I1217 00:40:24.977133  260378 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22168-12816/.minikube/machines/old-k8s-version-742860/id_rsa...
	I1217 00:40:25.052833  260378 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22168-12816/.minikube/machines/old-k8s-version-742860/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1217 00:40:25.076785  260378 cli_runner.go:164] Run: docker container inspect old-k8s-version-742860 --format={{.State.Status}}
	I1217 00:40:25.092655  260378 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1217 00:40:25.092692  260378 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-742860 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1217 00:40:25.150120  260378 cli_runner.go:164] Run: docker container inspect old-k8s-version-742860 --format={{.State.Status}}
	I1217 00:40:25.167880  260378 machine.go:94] provisionDockerMachine start ...
	I1217 00:40:25.168008  260378 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-742860
	I1217 00:40:25.185726  260378 main.go:143] libmachine: Using SSH client type: native
	I1217 00:40:25.186010  260378 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33058 <nil> <nil>}
	I1217 00:40:25.186029  260378 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 00:40:25.186692  260378 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:42402->127.0.0.1:33058: read: connection reset by peer
	I1217 00:40:28.313851  260378 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-742860
	
	I1217 00:40:28.313879  260378 ubuntu.go:182] provisioning hostname "old-k8s-version-742860"
	I1217 00:40:28.313937  260378 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-742860
	I1217 00:40:28.332236  260378 main.go:143] libmachine: Using SSH client type: native
	I1217 00:40:28.332499  260378 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33058 <nil> <nil>}
	I1217 00:40:28.332513  260378 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-742860 && echo "old-k8s-version-742860" | sudo tee /etc/hostname
	I1217 00:40:28.466475  260378 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-742860
	
	I1217 00:40:28.466551  260378 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-742860
	I1217 00:40:28.486568  260378 main.go:143] libmachine: Using SSH client type: native
	I1217 00:40:28.486785  260378 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33058 <nil> <nil>}
	I1217 00:40:28.486804  260378 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-742860' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-742860/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-742860' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 00:40:28.614542  260378 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 00:40:28.614577  260378 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22168-12816/.minikube CaCertPath:/home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22168-12816/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22168-12816/.minikube}
	I1217 00:40:28.614615  260378 ubuntu.go:190] setting up certificates
	I1217 00:40:28.614637  260378 provision.go:84] configureAuth start
	I1217 00:40:28.614711  260378 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-742860
	I1217 00:40:28.633371  260378 provision.go:143] copyHostCerts
	I1217 00:40:28.633446  260378 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-12816/.minikube/key.pem, removing ...
	I1217 00:40:28.633459  260378 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-12816/.minikube/key.pem
	I1217 00:40:28.633549  260378 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22168-12816/.minikube/key.pem (1679 bytes)
	I1217 00:40:28.633725  260378 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-12816/.minikube/ca.pem, removing ...
	I1217 00:40:28.633744  260378 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-12816/.minikube/ca.pem
	I1217 00:40:28.633807  260378 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22168-12816/.minikube/ca.pem (1078 bytes)
	I1217 00:40:28.633946  260378 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-12816/.minikube/cert.pem, removing ...
	I1217 00:40:28.633957  260378 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-12816/.minikube/cert.pem
	I1217 00:40:28.634004  260378 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22168-12816/.minikube/cert.pem (1123 bytes)
	I1217 00:40:28.634121  260378 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22168-12816/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-742860 san=[127.0.0.1 192.168.94.2 localhost minikube old-k8s-version-742860]
	I1217 00:40:28.713406  260378 provision.go:177] copyRemoteCerts
	I1217 00:40:28.713455  260378 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 00:40:28.713493  260378 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-742860
	I1217 00:40:28.731689  260378 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/old-k8s-version-742860/id_rsa Username:docker}
	I1217 00:40:28.824957  260378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 00:40:28.843558  260378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1217 00:40:28.862772  260378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1217 00:40:28.881560  260378 provision.go:87] duration metric: took 266.902318ms to configureAuth
	I1217 00:40:28.881591  260378 ubuntu.go:206] setting minikube options for container-runtime
	I1217 00:40:28.881785  260378 config.go:182] Loaded profile config "old-k8s-version-742860": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1217 00:40:28.881914  260378 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-742860
	I1217 00:40:28.900963  260378 main.go:143] libmachine: Using SSH client type: native
	I1217 00:40:28.901195  260378 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33058 <nil> <nil>}
	I1217 00:40:28.901214  260378 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 00:40:25.276279  211439 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1217 00:40:25.276653  211439 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1217 00:40:25.276708  211439 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:40:25.276752  211439 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:40:25.310232  211439 cri.go:89] found id: "81770d70df736c55a0ce9c6a787d9a2c0ad3240f757d951f79d8bd010c312232"
	I1217 00:40:25.310259  211439 cri.go:89] found id: ""
	I1217 00:40:25.310269  211439 logs.go:282] 1 containers: [81770d70df736c55a0ce9c6a787d9a2c0ad3240f757d951f79d8bd010c312232]
	I1217 00:40:25.310315  211439 ssh_runner.go:195] Run: which crictl
	I1217 00:40:25.313766  211439 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:40:25.313816  211439 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:40:25.345922  211439 cri.go:89] found id: ""
	I1217 00:40:25.345942  211439 logs.go:282] 0 containers: []
	W1217 00:40:25.345949  211439 logs.go:284] No container was found matching "etcd"
	I1217 00:40:25.345954  211439 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:40:25.346018  211439 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:40:25.378069  211439 cri.go:89] found id: ""
	I1217 00:40:25.378092  211439 logs.go:282] 0 containers: []
	W1217 00:40:25.378104  211439 logs.go:284] No container was found matching "coredns"
	I1217 00:40:25.378116  211439 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:40:25.378158  211439 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:40:25.411580  211439 cri.go:89] found id: "4e190c0a9f421d12adcc9208d08ab91cb11f9e4325f737619a4434567dbccaeb"
	I1217 00:40:25.411597  211439 cri.go:89] found id: ""
	I1217 00:40:25.411604  211439 logs.go:282] 1 containers: [4e190c0a9f421d12adcc9208d08ab91cb11f9e4325f737619a4434567dbccaeb]
	I1217 00:40:25.411657  211439 ssh_runner.go:195] Run: which crictl
	I1217 00:40:25.415271  211439 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:40:25.415331  211439 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:40:25.450979  211439 cri.go:89] found id: ""
	I1217 00:40:25.451025  211439 logs.go:282] 0 containers: []
	W1217 00:40:25.451033  211439 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:40:25.451039  211439 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:40:25.451107  211439 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:40:25.485166  211439 cri.go:89] found id: "a17d3e4d9deea2292fd636997da3ccec3cbb8173240b195f2d7308083588d524"
	I1217 00:40:25.485191  211439 cri.go:89] found id: ""
	I1217 00:40:25.485201  211439 logs.go:282] 1 containers: [a17d3e4d9deea2292fd636997da3ccec3cbb8173240b195f2d7308083588d524]
	I1217 00:40:25.485247  211439 ssh_runner.go:195] Run: which crictl
	I1217 00:40:25.488929  211439 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:40:25.488984  211439 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:40:25.521725  211439 cri.go:89] found id: ""
	I1217 00:40:25.521748  211439 logs.go:282] 0 containers: []
	W1217 00:40:25.521756  211439 logs.go:284] No container was found matching "kindnet"
	I1217 00:40:25.521761  211439 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 00:40:25.521810  211439 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 00:40:25.553933  211439 cri.go:89] found id: ""
	I1217 00:40:25.553960  211439 logs.go:282] 0 containers: []
	W1217 00:40:25.553970  211439 logs.go:284] No container was found matching "storage-provisioner"
	I1217 00:40:25.553979  211439 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:40:25.554003  211439 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:40:25.600705  211439 logs.go:123] Gathering logs for container status ...
	I1217 00:40:25.600733  211439 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:40:25.636757  211439 logs.go:123] Gathering logs for kubelet ...
	I1217 00:40:25.636779  211439 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:40:25.731949  211439 logs.go:123] Gathering logs for dmesg ...
	I1217 00:40:25.731977  211439 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:40:25.746732  211439 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:40:25.746760  211439 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:40:25.806954  211439 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:40:25.806980  211439 logs.go:123] Gathering logs for kube-apiserver [81770d70df736c55a0ce9c6a787d9a2c0ad3240f757d951f79d8bd010c312232] ...
	I1217 00:40:25.807009  211439 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81770d70df736c55a0ce9c6a787d9a2c0ad3240f757d951f79d8bd010c312232"
	I1217 00:40:25.845750  211439 logs.go:123] Gathering logs for kube-scheduler [4e190c0a9f421d12adcc9208d08ab91cb11f9e4325f737619a4434567dbccaeb] ...
	I1217 00:40:25.845777  211439 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4e190c0a9f421d12adcc9208d08ab91cb11f9e4325f737619a4434567dbccaeb"
	I1217 00:40:25.916255  211439 logs.go:123] Gathering logs for kube-controller-manager [a17d3e4d9deea2292fd636997da3ccec3cbb8173240b195f2d7308083588d524] ...
	I1217 00:40:25.916286  211439 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a17d3e4d9deea2292fd636997da3ccec3cbb8173240b195f2d7308083588d524"
	I1217 00:40:28.451229  211439 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1217 00:40:28.451615  211439 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1217 00:40:28.451665  211439 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:40:28.451723  211439 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:40:28.491524  211439 cri.go:89] found id: "81770d70df736c55a0ce9c6a787d9a2c0ad3240f757d951f79d8bd010c312232"
	I1217 00:40:28.491543  211439 cri.go:89] found id: ""
	I1217 00:40:28.491550  211439 logs.go:282] 1 containers: [81770d70df736c55a0ce9c6a787d9a2c0ad3240f757d951f79d8bd010c312232]
	I1217 00:40:28.491602  211439 ssh_runner.go:195] Run: which crictl
	I1217 00:40:28.495582  211439 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:40:28.495644  211439 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:40:28.529100  211439 cri.go:89] found id: ""
	I1217 00:40:28.529134  211439 logs.go:282] 0 containers: []
	W1217 00:40:28.529143  211439 logs.go:284] No container was found matching "etcd"
	I1217 00:40:28.529153  211439 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:40:28.529195  211439 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:40:28.565063  211439 cri.go:89] found id: ""
	I1217 00:40:28.565088  211439 logs.go:282] 0 containers: []
	W1217 00:40:28.565098  211439 logs.go:284] No container was found matching "coredns"
	I1217 00:40:28.565105  211439 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:40:28.565157  211439 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:40:28.597667  211439 cri.go:89] found id: "4e190c0a9f421d12adcc9208d08ab91cb11f9e4325f737619a4434567dbccaeb"
	I1217 00:40:28.597690  211439 cri.go:89] found id: ""
	I1217 00:40:28.597699  211439 logs.go:282] 1 containers: [4e190c0a9f421d12adcc9208d08ab91cb11f9e4325f737619a4434567dbccaeb]
	I1217 00:40:28.597741  211439 ssh_runner.go:195] Run: which crictl
	I1217 00:40:28.601266  211439 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:40:28.601318  211439 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:40:28.638506  211439 cri.go:89] found id: ""
	I1217 00:40:28.638530  211439 logs.go:282] 0 containers: []
	W1217 00:40:28.638540  211439 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:40:28.638547  211439 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:40:28.638602  211439 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:40:28.671219  211439 cri.go:89] found id: "a17d3e4d9deea2292fd636997da3ccec3cbb8173240b195f2d7308083588d524"
	I1217 00:40:28.671239  211439 cri.go:89] found id: ""
	I1217 00:40:28.671247  211439 logs.go:282] 1 containers: [a17d3e4d9deea2292fd636997da3ccec3cbb8173240b195f2d7308083588d524]
	I1217 00:40:28.671300  211439 ssh_runner.go:195] Run: which crictl
	I1217 00:40:28.674739  211439 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:40:28.674796  211439 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:40:28.707444  211439 cri.go:89] found id: ""
	I1217 00:40:28.707468  211439 logs.go:282] 0 containers: []
	W1217 00:40:28.707477  211439 logs.go:284] No container was found matching "kindnet"
	I1217 00:40:28.707484  211439 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 00:40:28.707531  211439 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 00:40:28.742507  211439 cri.go:89] found id: ""
	I1217 00:40:28.742532  211439 logs.go:282] 0 containers: []
	W1217 00:40:28.742540  211439 logs.go:284] No container was found matching "storage-provisioner"
	I1217 00:40:28.742549  211439 logs.go:123] Gathering logs for kube-controller-manager [a17d3e4d9deea2292fd636997da3ccec3cbb8173240b195f2d7308083588d524] ...
	I1217 00:40:28.742559  211439 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a17d3e4d9deea2292fd636997da3ccec3cbb8173240b195f2d7308083588d524"
	I1217 00:40:28.777791  211439 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:40:28.777830  211439 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:40:28.828555  211439 logs.go:123] Gathering logs for container status ...
	I1217 00:40:28.828588  211439 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:40:28.866165  211439 logs.go:123] Gathering logs for kubelet ...
	I1217 00:40:28.866190  211439 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:40:28.967796  211439 logs.go:123] Gathering logs for dmesg ...
	I1217 00:40:28.967827  211439 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:40:28.983423  211439 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:40:28.983446  211439 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:40:29.041129  211439 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:40:29.041151  211439 logs.go:123] Gathering logs for kube-apiserver [81770d70df736c55a0ce9c6a787d9a2c0ad3240f757d951f79d8bd010c312232] ...
	I1217 00:40:29.041167  211439 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81770d70df736c55a0ce9c6a787d9a2c0ad3240f757d951f79d8bd010c312232"
	I1217 00:40:29.080069  211439 logs.go:123] Gathering logs for kube-scheduler [4e190c0a9f421d12adcc9208d08ab91cb11f9e4325f737619a4434567dbccaeb] ...
	I1217 00:40:29.080097  211439 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4e190c0a9f421d12adcc9208d08ab91cb11f9e4325f737619a4434567dbccaeb"
	I1217 00:40:29.167987  260378 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 00:40:29.168028  260378 machine.go:97] duration metric: took 4.000122293s to provisionDockerMachine
	I1217 00:40:29.168038  260378 client.go:176] duration metric: took 10.052637435s to LocalClient.Create
	I1217 00:40:29.168064  260378 start.go:167] duration metric: took 10.052697921s to libmachine.API.Create "old-k8s-version-742860"
	I1217 00:40:29.168076  260378 start.go:293] postStartSetup for "old-k8s-version-742860" (driver="docker")
	I1217 00:40:29.168090  260378 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 00:40:29.168154  260378 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 00:40:29.168203  260378 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-742860
	I1217 00:40:29.186728  260378 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/old-k8s-version-742860/id_rsa Username:docker}
	I1217 00:40:29.280332  260378 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 00:40:29.283678  260378 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 00:40:29.283702  260378 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 00:40:29.283711  260378 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-12816/.minikube/addons for local assets ...
	I1217 00:40:29.283756  260378 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-12816/.minikube/files for local assets ...
	I1217 00:40:29.283835  260378 filesync.go:149] local asset: /home/jenkins/minikube-integration/22168-12816/.minikube/files/etc/ssl/certs/163542.pem -> 163542.pem in /etc/ssl/certs
	I1217 00:40:29.283929  260378 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 00:40:29.291222  260378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/files/etc/ssl/certs/163542.pem --> /etc/ssl/certs/163542.pem (1708 bytes)
	I1217 00:40:29.309958  260378 start.go:296] duration metric: took 141.87019ms for postStartSetup
	I1217 00:40:29.310291  260378 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-742860
	I1217 00:40:29.328206  260378 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/old-k8s-version-742860/config.json ...
	I1217 00:40:29.328488  260378 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 00:40:29.328538  260378 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-742860
	I1217 00:40:29.345584  260378 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/old-k8s-version-742860/id_rsa Username:docker}
	I1217 00:40:29.434928  260378 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 00:40:29.439296  260378 start.go:128] duration metric: took 10.325887516s to createHost
	I1217 00:40:29.439316  260378 start.go:83] releasing machines lock for "old-k8s-version-742860", held for 10.326020506s
	I1217 00:40:29.439381  260378 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-742860
	I1217 00:40:29.457771  260378 ssh_runner.go:195] Run: cat /version.json
	I1217 00:40:29.457819  260378 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-742860
	I1217 00:40:29.457855  260378 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 00:40:29.457933  260378 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-742860
	I1217 00:40:29.476125  260378 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/old-k8s-version-742860/id_rsa Username:docker}
	I1217 00:40:29.477213  260378 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/old-k8s-version-742860/id_rsa Username:docker}
	I1217 00:40:29.619606  260378 ssh_runner.go:195] Run: systemctl --version
	I1217 00:40:29.625963  260378 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 00:40:29.659584  260378 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 00:40:29.663924  260378 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 00:40:29.664027  260378 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 00:40:29.688608  260378 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1217 00:40:29.688624  260378 start.go:496] detecting cgroup driver to use...
	I1217 00:40:29.688649  260378 detect.go:190] detected "systemd" cgroup driver on host os
	I1217 00:40:29.688692  260378 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 00:40:29.704169  260378 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 00:40:29.715654  260378 docker.go:218] disabling cri-docker service (if available) ...
	I1217 00:40:29.715699  260378 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 00:40:29.731239  260378 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 00:40:29.747942  260378 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 00:40:29.831903  260378 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 00:40:29.917365  260378 docker.go:234] disabling docker service ...
	I1217 00:40:29.917425  260378 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 00:40:29.935666  260378 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 00:40:29.947680  260378 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 00:40:30.030681  260378 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 00:40:30.110472  260378 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 00:40:30.122651  260378 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 00:40:30.136462  260378 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1217 00:40:30.136521  260378 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:40:30.146107  260378 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1217 00:40:30.146176  260378 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:40:30.154584  260378 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:40:30.163225  260378 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:40:30.171120  260378 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 00:40:30.178642  260378 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:40:30.186643  260378 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:40:30.199580  260378 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:40:30.207834  260378 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 00:40:30.214668  260378 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 00:40:30.221487  260378 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:40:30.300288  260378 ssh_runner.go:195] Run: sudo systemctl restart crio
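	Taken together, the crictl.yaml write and the sed/grep edits above converge on a small set of CRI-O settings before the restart. The Go sketch below reproduces those expected results purely for illustration: only the keys touched by the logged commands are listed (the drop-in's section headers are not shown in the log), and the output file names here are examples, not paths on the test host.

	package main

	import (
		"fmt"
		"os"
	)

	// Key lines the logged pipeline leaves in /etc/crio/crio.conf.d/02-crio.conf.
	const crioDropIn = `pause_image = "registry.k8s.io/pause:3.9"
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
	`

	// Contents written to /etc/crictl.yaml earlier in the sequence.
	const crictlYAML = "runtime-endpoint: unix:///var/run/crio/crio.sock\n"

	func main() {
		// Writing the files directly would be an alternative to the sed pipeline.
		for name, data := range map[string]string{
			"02-crio.conf.example": crioDropIn,
			"crictl.yaml.example":  crictlYAML,
		} {
			if err := os.WriteFile(name, []byte(data), 0o644); err != nil {
				fmt.Println("write failed:", err)
			}
		}
	}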
	I1217 00:40:30.436826  260378 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 00:40:30.436889  260378 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 00:40:30.440783  260378 start.go:564] Will wait 60s for crictl version
	I1217 00:40:30.440836  260378 ssh_runner.go:195] Run: which crictl
	I1217 00:40:30.444193  260378 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 00:40:30.470659  260378 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1217 00:40:30.470737  260378 ssh_runner.go:195] Run: crio --version
	I1217 00:40:30.499956  260378 ssh_runner.go:195] Run: crio --version
	I1217 00:40:30.530527  260378 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.3 ...
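	The two "Will wait 60s" lines above describe a simple deadline poll on the CRI-O socket before crictl is queried. A minimal Go sketch of that wait, assuming the socket path and the 60s budget from the log (the helper itself is illustrative, not minikube code):

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForFile polls for a path until it exists or the deadline passes.
	func waitForFile(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out waiting for %s", path)
	}

	func main() {
		if err := waitForFile("/var/run/crio/crio.sock", 60*time.Second); err != nil {
			fmt.Println(err)
			os.Exit(1)
		}
		fmt.Println("crio socket is ready")
	}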
	I1217 00:40:27.370918  224114 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1217 00:40:27.371350  224114 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 00:40:27.371400  224114 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:40:27.371448  224114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:40:27.398448  224114 cri.go:89] found id: "bbbd1a227c5cedb8d48bd4671e505f8c3b234a95c40a60f240637f17b7839b10"
	I1217 00:40:27.398467  224114 cri.go:89] found id: ""
	I1217 00:40:27.398473  224114 logs.go:282] 1 containers: [bbbd1a227c5cedb8d48bd4671e505f8c3b234a95c40a60f240637f17b7839b10]
	I1217 00:40:27.398517  224114 ssh_runner.go:195] Run: which crictl
	I1217 00:40:27.402196  224114 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:40:27.402246  224114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:40:27.426622  224114 cri.go:89] found id: ""
	I1217 00:40:27.426649  224114 logs.go:282] 0 containers: []
	W1217 00:40:27.426660  224114 logs.go:284] No container was found matching "etcd"
	I1217 00:40:27.426666  224114 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:40:27.426708  224114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:40:27.450953  224114 cri.go:89] found id: ""
	I1217 00:40:27.450971  224114 logs.go:282] 0 containers: []
	W1217 00:40:27.450979  224114 logs.go:284] No container was found matching "coredns"
	I1217 00:40:27.450983  224114 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:40:27.451044  224114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:40:27.477443  224114 cri.go:89] found id: "935bec7f90de32d67ca8b89cce2e3566052a6687fa0bbbcda96111a31ae6fc6a"
	I1217 00:40:27.477463  224114 cri.go:89] found id: ""
	I1217 00:40:27.477472  224114 logs.go:282] 1 containers: [935bec7f90de32d67ca8b89cce2e3566052a6687fa0bbbcda96111a31ae6fc6a]
	I1217 00:40:27.477526  224114 ssh_runner.go:195] Run: which crictl
	I1217 00:40:27.481281  224114 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:40:27.481338  224114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:40:27.508882  224114 cri.go:89] found id: ""
	I1217 00:40:27.508901  224114 logs.go:282] 0 containers: []
	W1217 00:40:27.508908  224114 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:40:27.508915  224114 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:40:27.508967  224114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:40:27.535591  224114 cri.go:89] found id: "dc75709a155a2a5401b46d5ddac2a265bc12181fadc33c0e4f267d116f4d6829"
	I1217 00:40:27.535613  224114 cri.go:89] found id: ""
	I1217 00:40:27.535623  224114 logs.go:282] 1 containers: [dc75709a155a2a5401b46d5ddac2a265bc12181fadc33c0e4f267d116f4d6829]
	I1217 00:40:27.535683  224114 ssh_runner.go:195] Run: which crictl
	I1217 00:40:27.539853  224114 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:40:27.539917  224114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:40:27.564523  224114 cri.go:89] found id: ""
	I1217 00:40:27.564547  224114 logs.go:282] 0 containers: []
	W1217 00:40:27.564558  224114 logs.go:284] No container was found matching "kindnet"
	I1217 00:40:27.564566  224114 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 00:40:27.564626  224114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 00:40:27.589399  224114 cri.go:89] found id: ""
	I1217 00:40:27.589423  224114 logs.go:282] 0 containers: []
	W1217 00:40:27.589437  224114 logs.go:284] No container was found matching "storage-provisioner"
	I1217 00:40:27.589448  224114 logs.go:123] Gathering logs for kube-controller-manager [dc75709a155a2a5401b46d5ddac2a265bc12181fadc33c0e4f267d116f4d6829] ...
	I1217 00:40:27.589474  224114 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dc75709a155a2a5401b46d5ddac2a265bc12181fadc33c0e4f267d116f4d6829"
	I1217 00:40:27.613596  224114 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:40:27.613626  224114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:40:27.663966  224114 logs.go:123] Gathering logs for container status ...
	I1217 00:40:27.664014  224114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:40:27.692985  224114 logs.go:123] Gathering logs for kubelet ...
	I1217 00:40:27.693035  224114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:40:27.781391  224114 logs.go:123] Gathering logs for dmesg ...
	I1217 00:40:27.781429  224114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:40:27.796231  224114 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:40:27.796257  224114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:40:27.850741  224114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:40:27.850765  224114 logs.go:123] Gathering logs for kube-apiserver [bbbd1a227c5cedb8d48bd4671e505f8c3b234a95c40a60f240637f17b7839b10] ...
	I1217 00:40:27.850779  224114 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bbbd1a227c5cedb8d48bd4671e505f8c3b234a95c40a60f240637f17b7839b10"
	I1217 00:40:27.880273  224114 logs.go:123] Gathering logs for kube-scheduler [935bec7f90de32d67ca8b89cce2e3566052a6687fa0bbbcda96111a31ae6fc6a] ...
	I1217 00:40:27.880298  224114 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 935bec7f90de32d67ca8b89cce2e3566052a6687fa0bbbcda96111a31ae6fc6a"
	I1217 00:40:30.405372  224114 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1217 00:40:30.405744  224114 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 00:40:30.405790  224114 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:40:30.405836  224114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:40:30.430552  224114 cri.go:89] found id: "bbbd1a227c5cedb8d48bd4671e505f8c3b234a95c40a60f240637f17b7839b10"
	I1217 00:40:30.430573  224114 cri.go:89] found id: ""
	I1217 00:40:30.430580  224114 logs.go:282] 1 containers: [bbbd1a227c5cedb8d48bd4671e505f8c3b234a95c40a60f240637f17b7839b10]
	I1217 00:40:30.430637  224114 ssh_runner.go:195] Run: which crictl
	I1217 00:40:30.434944  224114 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:40:30.435027  224114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:40:30.462725  224114 cri.go:89] found id: ""
	I1217 00:40:30.462755  224114 logs.go:282] 0 containers: []
	W1217 00:40:30.462765  224114 logs.go:284] No container was found matching "etcd"
	I1217 00:40:30.462771  224114 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:40:30.462854  224114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:40:30.491554  224114 cri.go:89] found id: ""
	I1217 00:40:30.491577  224114 logs.go:282] 0 containers: []
	W1217 00:40:30.491585  224114 logs.go:284] No container was found matching "coredns"
	I1217 00:40:30.491590  224114 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:40:30.491641  224114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:40:30.519525  224114 cri.go:89] found id: "935bec7f90de32d67ca8b89cce2e3566052a6687fa0bbbcda96111a31ae6fc6a"
	I1217 00:40:30.519550  224114 cri.go:89] found id: ""
	I1217 00:40:30.519559  224114 logs.go:282] 1 containers: [935bec7f90de32d67ca8b89cce2e3566052a6687fa0bbbcda96111a31ae6fc6a]
	I1217 00:40:30.519620  224114 ssh_runner.go:195] Run: which crictl
	I1217 00:40:30.524112  224114 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:40:30.524166  224114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:40:30.550771  224114 cri.go:89] found id: ""
	I1217 00:40:30.550798  224114 logs.go:282] 0 containers: []
	W1217 00:40:30.550809  224114 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:40:30.550841  224114 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:40:30.550887  224114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:40:30.576553  224114 cri.go:89] found id: "dc75709a155a2a5401b46d5ddac2a265bc12181fadc33c0e4f267d116f4d6829"
	I1217 00:40:30.576578  224114 cri.go:89] found id: ""
	I1217 00:40:30.576588  224114 logs.go:282] 1 containers: [dc75709a155a2a5401b46d5ddac2a265bc12181fadc33c0e4f267d116f4d6829]
	I1217 00:40:30.576646  224114 ssh_runner.go:195] Run: which crictl
	I1217 00:40:30.580309  224114 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:40:30.580375  224114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:40:30.608954  224114 cri.go:89] found id: ""
	I1217 00:40:30.608981  224114 logs.go:282] 0 containers: []
	W1217 00:40:30.609017  224114 logs.go:284] No container was found matching "kindnet"
	I1217 00:40:30.609030  224114 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 00:40:30.609095  224114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 00:40:30.636971  224114 cri.go:89] found id: ""
	I1217 00:40:30.637009  224114 logs.go:282] 0 containers: []
	W1217 00:40:30.637023  224114 logs.go:284] No container was found matching "storage-provisioner"
	I1217 00:40:30.637033  224114 logs.go:123] Gathering logs for kubelet ...
	I1217 00:40:30.637047  224114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:40:30.728099  224114 logs.go:123] Gathering logs for dmesg ...
	I1217 00:40:30.728126  224114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:40:30.742422  224114 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:40:30.742443  224114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:40:30.806458  224114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:40:30.806489  224114 logs.go:123] Gathering logs for kube-apiserver [bbbd1a227c5cedb8d48bd4671e505f8c3b234a95c40a60f240637f17b7839b10] ...
	I1217 00:40:30.806507  224114 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bbbd1a227c5cedb8d48bd4671e505f8c3b234a95c40a60f240637f17b7839b10"
	I1217 00:40:30.838382  224114 logs.go:123] Gathering logs for kube-scheduler [935bec7f90de32d67ca8b89cce2e3566052a6687fa0bbbcda96111a31ae6fc6a] ...
	I1217 00:40:30.838413  224114 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 935bec7f90de32d67ca8b89cce2e3566052a6687fa0bbbcda96111a31ae6fc6a"
	I1217 00:40:30.865552  224114 logs.go:123] Gathering logs for kube-controller-manager [dc75709a155a2a5401b46d5ddac2a265bc12181fadc33c0e4f267d116f4d6829] ...
	I1217 00:40:30.865578  224114 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dc75709a155a2a5401b46d5ddac2a265bc12181fadc33c0e4f267d116f4d6829"
	I1217 00:40:30.893720  224114 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:40:30.893742  224114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:40:30.947448  224114 logs.go:123] Gathering logs for container status ...
	I1217 00:40:30.947477  224114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:40:30.531552  260378 cli_runner.go:164] Run: docker network inspect old-k8s-version-742860 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 00:40:30.550936  260378 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1217 00:40:30.554928  260378 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 00:40:30.564792  260378 kubeadm.go:884] updating cluster {Name:old-k8s-version-742860 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-742860 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 00:40:30.564918  260378 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1217 00:40:30.564967  260378 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 00:40:30.597440  260378 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 00:40:30.597464  260378 crio.go:433] Images already preloaded, skipping extraction
	I1217 00:40:30.597515  260378 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 00:40:30.623778  260378 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 00:40:30.623798  260378 cache_images.go:86] Images are preloaded, skipping loading
	I1217 00:40:30.623806  260378 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.28.0 crio true true} ...
	I1217 00:40:30.623904  260378 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-742860 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-742860 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 00:40:30.623980  260378 ssh_runner.go:195] Run: crio config
	I1217 00:40:30.675900  260378 cni.go:84] Creating CNI manager for ""
	I1217 00:40:30.675923  260378 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 00:40:30.675945  260378 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 00:40:30.675966  260378 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-742860 NodeName:old-k8s-version-742860 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 00:40:30.676109  260378 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-742860"
	  kubeletExtraArgs:
	    node-ip: 192.168.94.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 00:40:30.676165  260378 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1217 00:40:30.683723  260378 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 00:40:30.683792  260378 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 00:40:30.691107  260378 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1217 00:40:30.702784  260378 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1217 00:40:30.716741  260378 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I1217 00:40:30.728634  260378 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1217 00:40:30.732201  260378 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 00:40:30.742129  260378 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:40:30.832576  260378 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 00:40:30.859312  260378 certs.go:69] Setting up /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/old-k8s-version-742860 for IP: 192.168.94.2
	I1217 00:40:30.859332  260378 certs.go:195] generating shared ca certs ...
	I1217 00:40:30.859356  260378 certs.go:227] acquiring lock for ca certs: {Name:mk3fafd0dd66863a6056cb02497503a5e6afecf4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:40:30.859507  260378 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22168-12816/.minikube/ca.key
	I1217 00:40:30.859553  260378 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22168-12816/.minikube/proxy-client-ca.key
	I1217 00:40:30.859567  260378 certs.go:257] generating profile certs ...
	I1217 00:40:30.859637  260378 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/old-k8s-version-742860/client.key
	I1217 00:40:30.859661  260378 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/old-k8s-version-742860/client.crt with IP's: []
	I1217 00:40:30.923312  260378 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/old-k8s-version-742860/client.crt ...
	I1217 00:40:30.923337  260378 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/old-k8s-version-742860/client.crt: {Name:mkc2e7987f8dca1dd68e74e83cd0563317a27bcb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:40:30.923519  260378 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/old-k8s-version-742860/client.key ...
	I1217 00:40:30.923537  260378 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/old-k8s-version-742860/client.key: {Name:mke2ebb2ed94472b7e42922475620100e35f8332 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:40:30.923660  260378 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/old-k8s-version-742860/apiserver.key.a4fa2e0f
	I1217 00:40:30.923686  260378 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/old-k8s-version-742860/apiserver.crt.a4fa2e0f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1217 00:40:31.138414  260378 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/old-k8s-version-742860/apiserver.crt.a4fa2e0f ...
	I1217 00:40:31.138438  260378 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/old-k8s-version-742860/apiserver.crt.a4fa2e0f: {Name:mk1f0ac86e8577f4624c499632e740b4f3bf7c47 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:40:31.138591  260378 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/old-k8s-version-742860/apiserver.key.a4fa2e0f ...
	I1217 00:40:31.138606  260378 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/old-k8s-version-742860/apiserver.key.a4fa2e0f: {Name:mk5590c3dfc4a953e25725d4a32b11ff5da082ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:40:31.138677  260378 certs.go:382] copying /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/old-k8s-version-742860/apiserver.crt.a4fa2e0f -> /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/old-k8s-version-742860/apiserver.crt
	I1217 00:40:31.138777  260378 certs.go:386] copying /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/old-k8s-version-742860/apiserver.key.a4fa2e0f -> /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/old-k8s-version-742860/apiserver.key
	I1217 00:40:31.138834  260378 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/old-k8s-version-742860/proxy-client.key
	I1217 00:40:31.138849  260378 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/old-k8s-version-742860/proxy-client.crt with IP's: []
	I1217 00:40:31.171215  260378 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/old-k8s-version-742860/proxy-client.crt ...
	I1217 00:40:31.171235  260378 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/old-k8s-version-742860/proxy-client.crt: {Name:mkf2ca702f545d56b8e4effc00e312b977827d5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:40:31.171367  260378 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/old-k8s-version-742860/proxy-client.key ...
	I1217 00:40:31.171381  260378 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/old-k8s-version-742860/proxy-client.key: {Name:mk0561fbabea3b83aac54eccef136c0dab935432 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:40:31.171542  260378 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/16354.pem (1338 bytes)
	W1217 00:40:31.171585  260378 certs.go:480] ignoring /home/jenkins/minikube-integration/22168-12816/.minikube/certs/16354_empty.pem, impossibly tiny 0 bytes
	I1217 00:40:31.171594  260378 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca-key.pem (1679 bytes)
	I1217 00:40:31.171617  260378 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem (1078 bytes)
	I1217 00:40:31.171641  260378 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/cert.pem (1123 bytes)
	I1217 00:40:31.171663  260378 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/key.pem (1679 bytes)
	I1217 00:40:31.171703  260378 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/files/etc/ssl/certs/163542.pem (1708 bytes)
	I1217 00:40:31.172257  260378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 00:40:31.189652  260378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1217 00:40:31.206382  260378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 00:40:31.223590  260378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1671 bytes)
	I1217 00:40:31.239857  260378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/old-k8s-version-742860/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1217 00:40:31.256385  260378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/old-k8s-version-742860/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 00:40:31.272577  260378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/old-k8s-version-742860/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 00:40:31.288949  260378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/old-k8s-version-742860/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 00:40:31.305016  260378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 00:40:31.323268  260378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/certs/16354.pem --> /usr/share/ca-certificates/16354.pem (1338 bytes)
	I1217 00:40:31.339695  260378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/files/etc/ssl/certs/163542.pem --> /usr/share/ca-certificates/163542.pem (1708 bytes)
	I1217 00:40:31.356147  260378 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 00:40:31.367773  260378 ssh_runner.go:195] Run: openssl version
	I1217 00:40:31.373618  260378 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/163542.pem
	I1217 00:40:31.380487  260378 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/163542.pem /etc/ssl/certs/163542.pem
	I1217 00:40:31.387634  260378 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/163542.pem
	I1217 00:40:31.391002  260378 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 00:13 /usr/share/ca-certificates/163542.pem
	I1217 00:40:31.391043  260378 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/163542.pem
	I1217 00:40:31.424390  260378 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 00:40:31.431234  260378 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/163542.pem /etc/ssl/certs/3ec20f2e.0
	I1217 00:40:31.438263  260378 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:40:31.445786  260378 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 00:40:31.452881  260378 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:40:31.456380  260378 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 00:05 /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:40:31.456430  260378 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:40:31.490316  260378 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 00:40:31.497603  260378 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1217 00:40:31.504503  260378 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/16354.pem
	I1217 00:40:31.511298  260378 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/16354.pem /etc/ssl/certs/16354.pem
	I1217 00:40:31.518131  260378 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16354.pem
	I1217 00:40:31.521598  260378 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 00:13 /usr/share/ca-certificates/16354.pem
	I1217 00:40:31.521642  260378 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16354.pem
	I1217 00:40:31.555538  260378 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 00:40:31.562650  260378 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/16354.pem /etc/ssl/certs/51391683.0
	I1217 00:40:31.569470  260378 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 00:40:31.572679  260378 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1217 00:40:31.572729  260378 kubeadm.go:401] StartCluster: {Name:old-k8s-version-742860 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-742860 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:40:31.572804  260378 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 00:40:31.572837  260378 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 00:40:31.598856  260378 cri.go:89] found id: ""
	I1217 00:40:31.598915  260378 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 00:40:31.606206  260378 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 00:40:31.613352  260378 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 00:40:31.613387  260378 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 00:40:31.620465  260378 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 00:40:31.620479  260378 kubeadm.go:158] found existing configuration files:
	
	I1217 00:40:31.620518  260378 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1217 00:40:31.627716  260378 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 00:40:31.627764  260378 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 00:40:31.634437  260378 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1217 00:40:31.641315  260378 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 00:40:31.641351  260378 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 00:40:31.647931  260378 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1217 00:40:31.654755  260378 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 00:40:31.654799  260378 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 00:40:31.661801  260378 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1217 00:40:31.668909  260378 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 00:40:31.668953  260378 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 00:40:31.675788  260378 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 00:40:31.777938  260378 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1217 00:40:31.859406  260378 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 00:40:31.659049  211439 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1217 00:40:31.659377  211439 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1217 00:40:31.659420  211439 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:40:31.659462  211439 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:40:31.695385  211439 cri.go:89] found id: "81770d70df736c55a0ce9c6a787d9a2c0ad3240f757d951f79d8bd010c312232"
	I1217 00:40:31.695409  211439 cri.go:89] found id: ""
	I1217 00:40:31.695420  211439 logs.go:282] 1 containers: [81770d70df736c55a0ce9c6a787d9a2c0ad3240f757d951f79d8bd010c312232]
	I1217 00:40:31.695474  211439 ssh_runner.go:195] Run: which crictl
	I1217 00:40:31.699504  211439 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:40:31.699551  211439 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:40:31.732656  211439 cri.go:89] found id: ""
	I1217 00:40:31.732685  211439 logs.go:282] 0 containers: []
	W1217 00:40:31.732696  211439 logs.go:284] No container was found matching "etcd"
	I1217 00:40:31.732703  211439 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:40:31.732752  211439 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:40:31.774413  211439 cri.go:89] found id: ""
	I1217 00:40:31.774440  211439 logs.go:282] 0 containers: []
	W1217 00:40:31.774451  211439 logs.go:284] No container was found matching "coredns"
	I1217 00:40:31.774458  211439 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:40:31.774515  211439 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:40:31.817348  211439 cri.go:89] found id: "4e190c0a9f421d12adcc9208d08ab91cb11f9e4325f737619a4434567dbccaeb"
	I1217 00:40:31.817373  211439 cri.go:89] found id: ""
	I1217 00:40:31.817382  211439 logs.go:282] 1 containers: [4e190c0a9f421d12adcc9208d08ab91cb11f9e4325f737619a4434567dbccaeb]
	I1217 00:40:31.817451  211439 ssh_runner.go:195] Run: which crictl
	I1217 00:40:31.821427  211439 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:40:31.821493  211439 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:40:31.857685  211439 cri.go:89] found id: ""
	I1217 00:40:31.857711  211439 logs.go:282] 0 containers: []
	W1217 00:40:31.857722  211439 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:40:31.857735  211439 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:40:31.857793  211439 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:40:31.893723  211439 cri.go:89] found id: "a17d3e4d9deea2292fd636997da3ccec3cbb8173240b195f2d7308083588d524"
	I1217 00:40:31.893748  211439 cri.go:89] found id: ""
	I1217 00:40:31.893759  211439 logs.go:282] 1 containers: [a17d3e4d9deea2292fd636997da3ccec3cbb8173240b195f2d7308083588d524]
	I1217 00:40:31.893821  211439 ssh_runner.go:195] Run: which crictl
	I1217 00:40:31.898128  211439 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:40:31.898189  211439 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:40:31.937525  211439 cri.go:89] found id: ""
	I1217 00:40:31.937555  211439 logs.go:282] 0 containers: []
	W1217 00:40:31.937567  211439 logs.go:284] No container was found matching "kindnet"
	I1217 00:40:31.937575  211439 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 00:40:31.937642  211439 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 00:40:31.975392  211439 cri.go:89] found id: ""
	I1217 00:40:31.975418  211439 logs.go:282] 0 containers: []
	W1217 00:40:31.975430  211439 logs.go:284] No container was found matching "storage-provisioner"
	I1217 00:40:31.975442  211439 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:40:31.975458  211439 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:40:32.036952  211439 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:40:32.036975  211439 logs.go:123] Gathering logs for kube-apiserver [81770d70df736c55a0ce9c6a787d9a2c0ad3240f757d951f79d8bd010c312232] ...
	I1217 00:40:32.037004  211439 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81770d70df736c55a0ce9c6a787d9a2c0ad3240f757d951f79d8bd010c312232"
	I1217 00:40:32.074375  211439 logs.go:123] Gathering logs for kube-scheduler [4e190c0a9f421d12adcc9208d08ab91cb11f9e4325f737619a4434567dbccaeb] ...
	I1217 00:40:32.074399  211439 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4e190c0a9f421d12adcc9208d08ab91cb11f9e4325f737619a4434567dbccaeb"
	I1217 00:40:32.155878  211439 logs.go:123] Gathering logs for kube-controller-manager [a17d3e4d9deea2292fd636997da3ccec3cbb8173240b195f2d7308083588d524] ...
	I1217 00:40:32.155908  211439 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a17d3e4d9deea2292fd636997da3ccec3cbb8173240b195f2d7308083588d524"
	I1217 00:40:32.192897  211439 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:40:32.192921  211439 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:40:32.243771  211439 logs.go:123] Gathering logs for container status ...
	I1217 00:40:32.243803  211439 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:40:32.283363  211439 logs.go:123] Gathering logs for kubelet ...
	I1217 00:40:32.283386  211439 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:40:32.380340  211439 logs.go:123] Gathering logs for dmesg ...
	I1217 00:40:32.380371  211439 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:40:34.898112  211439 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1217 00:40:34.898487  211439 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1217 00:40:34.898542  211439 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:40:34.898585  211439 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:40:34.939325  211439 cri.go:89] found id: "81770d70df736c55a0ce9c6a787d9a2c0ad3240f757d951f79d8bd010c312232"
	I1217 00:40:34.939346  211439 cri.go:89] found id: ""
	I1217 00:40:34.939353  211439 logs.go:282] 1 containers: [81770d70df736c55a0ce9c6a787d9a2c0ad3240f757d951f79d8bd010c312232]
	I1217 00:40:34.939407  211439 ssh_runner.go:195] Run: which crictl
	I1217 00:40:34.942936  211439 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:40:34.943018  211439 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:40:34.976905  211439 cri.go:89] found id: ""
	I1217 00:40:34.976931  211439 logs.go:282] 0 containers: []
	W1217 00:40:34.976942  211439 logs.go:284] No container was found matching "etcd"
	I1217 00:40:34.976950  211439 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:40:34.977030  211439 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:40:35.009484  211439 cri.go:89] found id: ""
	I1217 00:40:35.009509  211439 logs.go:282] 0 containers: []
	W1217 00:40:35.009516  211439 logs.go:284] No container was found matching "coredns"
	I1217 00:40:35.009522  211439 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:40:35.009564  211439 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:40:35.043184  211439 cri.go:89] found id: "4e190c0a9f421d12adcc9208d08ab91cb11f9e4325f737619a4434567dbccaeb"
	I1217 00:40:35.043211  211439 cri.go:89] found id: ""
	I1217 00:40:35.043220  211439 logs.go:282] 1 containers: [4e190c0a9f421d12adcc9208d08ab91cb11f9e4325f737619a4434567dbccaeb]
	I1217 00:40:35.043266  211439 ssh_runner.go:195] Run: which crictl
	I1217 00:40:35.047033  211439 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:40:35.047099  211439 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:40:35.078794  211439 cri.go:89] found id: ""
	I1217 00:40:35.078817  211439 logs.go:282] 0 containers: []
	W1217 00:40:35.078826  211439 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:40:35.078833  211439 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:40:35.078891  211439 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:40:35.110974  211439 cri.go:89] found id: "a17d3e4d9deea2292fd636997da3ccec3cbb8173240b195f2d7308083588d524"
	I1217 00:40:35.111002  211439 cri.go:89] found id: ""
	I1217 00:40:35.111012  211439 logs.go:282] 1 containers: [a17d3e4d9deea2292fd636997da3ccec3cbb8173240b195f2d7308083588d524]
	I1217 00:40:35.111068  211439 ssh_runner.go:195] Run: which crictl
	I1217 00:40:35.114550  211439 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:40:35.114601  211439 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:40:35.146509  211439 cri.go:89] found id: ""
	I1217 00:40:35.146530  211439 logs.go:282] 0 containers: []
	W1217 00:40:35.146538  211439 logs.go:284] No container was found matching "kindnet"
	I1217 00:40:35.146543  211439 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 00:40:35.146591  211439 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 00:40:35.180415  211439 cri.go:89] found id: ""
	I1217 00:40:35.180453  211439 logs.go:282] 0 containers: []
	W1217 00:40:35.180463  211439 logs.go:284] No container was found matching "storage-provisioner"
	I1217 00:40:35.180474  211439 logs.go:123] Gathering logs for kube-apiserver [81770d70df736c55a0ce9c6a787d9a2c0ad3240f757d951f79d8bd010c312232] ...
	I1217 00:40:35.180491  211439 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81770d70df736c55a0ce9c6a787d9a2c0ad3240f757d951f79d8bd010c312232"
	I1217 00:40:33.479193  224114 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1217 00:40:39.725362  260378 kubeadm.go:319] [init] Using Kubernetes version: v1.28.0
	I1217 00:40:39.725420  260378 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 00:40:39.725504  260378 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 00:40:39.725571  260378 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1217 00:40:39.725648  260378 kubeadm.go:319] OS: Linux
	I1217 00:40:39.725731  260378 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 00:40:39.725832  260378 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 00:40:39.725914  260378 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 00:40:39.725971  260378 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 00:40:39.726050  260378 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 00:40:39.726119  260378 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 00:40:39.726197  260378 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 00:40:39.726269  260378 kubeadm.go:319] CGROUPS_IO: enabled
	I1217 00:40:39.726358  260378 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 00:40:39.726526  260378 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 00:40:39.726679  260378 kubeadm.go:319] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1217 00:40:39.726751  260378 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 00:40:39.727979  260378 out.go:252]   - Generating certificates and keys ...
	I1217 00:40:39.728106  260378 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 00:40:39.728181  260378 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 00:40:39.728287  260378 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1217 00:40:39.728411  260378 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1217 00:40:39.728483  260378 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1217 00:40:39.728526  260378 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1217 00:40:39.728570  260378 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1217 00:40:39.728699  260378 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-742860] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1217 00:40:39.728763  260378 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1217 00:40:39.728937  260378 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-742860] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1217 00:40:39.729036  260378 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1217 00:40:39.729101  260378 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1217 00:40:39.729167  260378 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1217 00:40:39.729255  260378 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 00:40:39.729307  260378 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 00:40:39.729350  260378 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 00:40:39.729404  260378 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 00:40:39.729475  260378 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 00:40:39.729555  260378 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 00:40:39.729609  260378 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 00:40:39.731412  260378 out.go:252]   - Booting up control plane ...
	I1217 00:40:39.731487  260378 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 00:40:39.731554  260378 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 00:40:39.731608  260378 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 00:40:39.731709  260378 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 00:40:39.731782  260378 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 00:40:39.731819  260378 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 00:40:39.731968  260378 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1217 00:40:39.732070  260378 kubeadm.go:319] [apiclient] All control plane components are healthy after 4.002050 seconds
	I1217 00:40:39.732190  260378 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1217 00:40:39.732369  260378 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1217 00:40:39.732448  260378 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1217 00:40:39.732646  260378 kubeadm.go:319] [mark-control-plane] Marking the node old-k8s-version-742860 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1217 00:40:39.732725  260378 kubeadm.go:319] [bootstrap-token] Using token: hqmof7.wk744yacwrf0amj6
	I1217 00:40:39.733744  260378 out.go:252]   - Configuring RBAC rules ...
	I1217 00:40:39.733861  260378 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1217 00:40:39.733957  260378 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1217 00:40:39.734112  260378 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1217 00:40:39.734290  260378 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1217 00:40:39.734439  260378 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1217 00:40:39.734531  260378 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1217 00:40:39.734659  260378 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1217 00:40:39.734702  260378 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1217 00:40:39.734743  260378 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1217 00:40:39.734749  260378 kubeadm.go:319] 
	I1217 00:40:39.734804  260378 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1217 00:40:39.734810  260378 kubeadm.go:319] 
	I1217 00:40:39.734878  260378 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1217 00:40:39.734883  260378 kubeadm.go:319] 
	I1217 00:40:39.734913  260378 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1217 00:40:39.735017  260378 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1217 00:40:39.735094  260378 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1217 00:40:39.735102  260378 kubeadm.go:319] 
	I1217 00:40:39.735172  260378 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1217 00:40:39.735181  260378 kubeadm.go:319] 
	I1217 00:40:39.735237  260378 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1217 00:40:39.735246  260378 kubeadm.go:319] 
	I1217 00:40:39.735287  260378 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1217 00:40:39.735347  260378 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1217 00:40:39.735404  260378 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1217 00:40:39.735410  260378 kubeadm.go:319] 
	I1217 00:40:39.735491  260378 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1217 00:40:39.735591  260378 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1217 00:40:39.735602  260378 kubeadm.go:319] 
	I1217 00:40:39.735714  260378 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token hqmof7.wk744yacwrf0amj6 \
	I1217 00:40:39.735878  260378 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:a7c34974519aee4953e03245da076d7a2eba06e40135880a85806e2dab303fa1 \
	I1217 00:40:39.735904  260378 kubeadm.go:319] 	--control-plane 
	I1217 00:40:39.735912  260378 kubeadm.go:319] 
	I1217 00:40:39.736043  260378 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1217 00:40:39.736053  260378 kubeadm.go:319] 
	I1217 00:40:39.736157  260378 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token hqmof7.wk744yacwrf0amj6 \
	I1217 00:40:39.736309  260378 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:a7c34974519aee4953e03245da076d7a2eba06e40135880a85806e2dab303fa1 
	I1217 00:40:39.736323  260378 cni.go:84] Creating CNI manager for ""
	I1217 00:40:39.736329  260378 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 00:40:39.738163  260378 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1217 00:40:35.217931  211439 logs.go:123] Gathering logs for kube-scheduler [4e190c0a9f421d12adcc9208d08ab91cb11f9e4325f737619a4434567dbccaeb] ...
	I1217 00:40:35.217957  211439 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4e190c0a9f421d12adcc9208d08ab91cb11f9e4325f737619a4434567dbccaeb"
	I1217 00:40:35.291144  211439 logs.go:123] Gathering logs for kube-controller-manager [a17d3e4d9deea2292fd636997da3ccec3cbb8173240b195f2d7308083588d524] ...
	I1217 00:40:35.291175  211439 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a17d3e4d9deea2292fd636997da3ccec3cbb8173240b195f2d7308083588d524"
	I1217 00:40:35.326138  211439 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:40:35.326161  211439 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:40:35.385919  211439 logs.go:123] Gathering logs for container status ...
	I1217 00:40:35.385958  211439 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:40:35.429189  211439 logs.go:123] Gathering logs for kubelet ...
	I1217 00:40:35.429229  211439 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:40:35.544692  211439 logs.go:123] Gathering logs for dmesg ...
	I1217 00:40:35.544736  211439 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:40:35.561515  211439 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:40:35.561544  211439 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:40:35.621733  211439 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:40:38.122380  211439 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1217 00:40:38.122793  211439 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1217 00:40:38.122848  211439 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:40:38.122907  211439 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:40:38.163484  211439 cri.go:89] found id: "81770d70df736c55a0ce9c6a787d9a2c0ad3240f757d951f79d8bd010c312232"
	I1217 00:40:38.163506  211439 cri.go:89] found id: ""
	I1217 00:40:38.163515  211439 logs.go:282] 1 containers: [81770d70df736c55a0ce9c6a787d9a2c0ad3240f757d951f79d8bd010c312232]
	I1217 00:40:38.163570  211439 ssh_runner.go:195] Run: which crictl
	I1217 00:40:38.168125  211439 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:40:38.168200  211439 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:40:38.207552  211439 cri.go:89] found id: ""
	I1217 00:40:38.207583  211439 logs.go:282] 0 containers: []
	W1217 00:40:38.207593  211439 logs.go:284] No container was found matching "etcd"
	I1217 00:40:38.207603  211439 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:40:38.207665  211439 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:40:38.244515  211439 cri.go:89] found id: ""
	I1217 00:40:38.244537  211439 logs.go:282] 0 containers: []
	W1217 00:40:38.244548  211439 logs.go:284] No container was found matching "coredns"
	I1217 00:40:38.244555  211439 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:40:38.244607  211439 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:40:38.280064  211439 cri.go:89] found id: "4e190c0a9f421d12adcc9208d08ab91cb11f9e4325f737619a4434567dbccaeb"
	I1217 00:40:38.280086  211439 cri.go:89] found id: ""
	I1217 00:40:38.280094  211439 logs.go:282] 1 containers: [4e190c0a9f421d12adcc9208d08ab91cb11f9e4325f737619a4434567dbccaeb]
	I1217 00:40:38.280156  211439 ssh_runner.go:195] Run: which crictl
	I1217 00:40:38.284252  211439 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:40:38.284318  211439 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:40:38.320174  211439 cri.go:89] found id: ""
	I1217 00:40:38.320198  211439 logs.go:282] 0 containers: []
	W1217 00:40:38.320208  211439 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:40:38.320215  211439 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:40:38.320277  211439 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:40:38.354172  211439 cri.go:89] found id: "a17d3e4d9deea2292fd636997da3ccec3cbb8173240b195f2d7308083588d524"
	I1217 00:40:38.354190  211439 cri.go:89] found id: ""
	I1217 00:40:38.354197  211439 logs.go:282] 1 containers: [a17d3e4d9deea2292fd636997da3ccec3cbb8173240b195f2d7308083588d524]
	I1217 00:40:38.354241  211439 ssh_runner.go:195] Run: which crictl
	I1217 00:40:38.358312  211439 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:40:38.358371  211439 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:40:38.391917  211439 cri.go:89] found id: ""
	I1217 00:40:38.391947  211439 logs.go:282] 0 containers: []
	W1217 00:40:38.391957  211439 logs.go:284] No container was found matching "kindnet"
	I1217 00:40:38.391964  211439 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 00:40:38.392031  211439 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 00:40:38.425384  211439 cri.go:89] found id: ""
	I1217 00:40:38.425408  211439 logs.go:282] 0 containers: []
	W1217 00:40:38.425418  211439 logs.go:284] No container was found matching "storage-provisioner"
	I1217 00:40:38.425427  211439 logs.go:123] Gathering logs for container status ...
	I1217 00:40:38.425439  211439 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:40:38.462536  211439 logs.go:123] Gathering logs for kubelet ...
	I1217 00:40:38.462573  211439 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:40:38.569071  211439 logs.go:123] Gathering logs for dmesg ...
	I1217 00:40:38.569104  211439 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:40:38.585223  211439 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:40:38.585246  211439 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:40:38.649440  211439 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:40:38.649463  211439 logs.go:123] Gathering logs for kube-apiserver [81770d70df736c55a0ce9c6a787d9a2c0ad3240f757d951f79d8bd010c312232] ...
	I1217 00:40:38.649479  211439 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81770d70df736c55a0ce9c6a787d9a2c0ad3240f757d951f79d8bd010c312232"
	I1217 00:40:38.690489  211439 logs.go:123] Gathering logs for kube-scheduler [4e190c0a9f421d12adcc9208d08ab91cb11f9e4325f737619a4434567dbccaeb] ...
	I1217 00:40:38.690518  211439 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4e190c0a9f421d12adcc9208d08ab91cb11f9e4325f737619a4434567dbccaeb"
	I1217 00:40:38.766978  211439 logs.go:123] Gathering logs for kube-controller-manager [a17d3e4d9deea2292fd636997da3ccec3cbb8173240b195f2d7308083588d524] ...
	I1217 00:40:38.767015  211439 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a17d3e4d9deea2292fd636997da3ccec3cbb8173240b195f2d7308083588d524"
	I1217 00:40:38.804211  211439 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:40:38.804241  211439 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:40:38.479789  224114 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1217 00:40:38.479847  224114 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:40:38.479906  224114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:40:38.509381  224114 cri.go:89] found id: "a2ff81b20789f24abdadda8b00c85dbe176cb1f647a6ae57405ea118d299cc22"
	I1217 00:40:38.509406  224114 cri.go:89] found id: "bbbd1a227c5cedb8d48bd4671e505f8c3b234a95c40a60f240637f17b7839b10"
	I1217 00:40:38.509414  224114 cri.go:89] found id: ""
	I1217 00:40:38.509424  224114 logs.go:282] 2 containers: [a2ff81b20789f24abdadda8b00c85dbe176cb1f647a6ae57405ea118d299cc22 bbbd1a227c5cedb8d48bd4671e505f8c3b234a95c40a60f240637f17b7839b10]
	I1217 00:40:38.509486  224114 ssh_runner.go:195] Run: which crictl
	I1217 00:40:38.513477  224114 ssh_runner.go:195] Run: which crictl
	I1217 00:40:38.517019  224114 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:40:38.517078  224114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:40:38.542619  224114 cri.go:89] found id: ""
	I1217 00:40:38.542641  224114 logs.go:282] 0 containers: []
	W1217 00:40:38.542650  224114 logs.go:284] No container was found matching "etcd"
	I1217 00:40:38.542658  224114 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:40:38.542707  224114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:40:38.570542  224114 cri.go:89] found id: ""
	I1217 00:40:38.570562  224114 logs.go:282] 0 containers: []
	W1217 00:40:38.570572  224114 logs.go:284] No container was found matching "coredns"
	I1217 00:40:38.570580  224114 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:40:38.570637  224114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:40:38.598267  224114 cri.go:89] found id: "935bec7f90de32d67ca8b89cce2e3566052a6687fa0bbbcda96111a31ae6fc6a"
	I1217 00:40:38.598289  224114 cri.go:89] found id: ""
	I1217 00:40:38.598298  224114 logs.go:282] 1 containers: [935bec7f90de32d67ca8b89cce2e3566052a6687fa0bbbcda96111a31ae6fc6a]
	I1217 00:40:38.598362  224114 ssh_runner.go:195] Run: which crictl
	I1217 00:40:38.602229  224114 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:40:38.602289  224114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:40:38.630094  224114 cri.go:89] found id: ""
	I1217 00:40:38.630119  224114 logs.go:282] 0 containers: []
	W1217 00:40:38.630130  224114 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:40:38.630137  224114 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:40:38.630194  224114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:40:38.658388  224114 cri.go:89] found id: "dc75709a155a2a5401b46d5ddac2a265bc12181fadc33c0e4f267d116f4d6829"
	I1217 00:40:38.658414  224114 cri.go:89] found id: ""
	I1217 00:40:38.658424  224114 logs.go:282] 1 containers: [dc75709a155a2a5401b46d5ddac2a265bc12181fadc33c0e4f267d116f4d6829]
	I1217 00:40:38.658486  224114 ssh_runner.go:195] Run: which crictl
	I1217 00:40:38.662444  224114 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:40:38.662510  224114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:40:38.690609  224114 cri.go:89] found id: ""
	I1217 00:40:38.690634  224114 logs.go:282] 0 containers: []
	W1217 00:40:38.690645  224114 logs.go:284] No container was found matching "kindnet"
	I1217 00:40:38.690653  224114 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 00:40:38.690708  224114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 00:40:38.718131  224114 cri.go:89] found id: ""
	I1217 00:40:38.718159  224114 logs.go:282] 0 containers: []
	W1217 00:40:38.718170  224114 logs.go:284] No container was found matching "storage-provisioner"
	I1217 00:40:38.718189  224114 logs.go:123] Gathering logs for kubelet ...
	I1217 00:40:38.718204  224114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:40:38.804928  224114 logs.go:123] Gathering logs for dmesg ...
	I1217 00:40:38.804955  224114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:40:38.819756  224114 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:40:38.819784  224114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1217 00:40:39.739200  260378 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1217 00:40:39.743450  260378 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1217 00:40:39.743465  260378 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1217 00:40:39.756610  260378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1217 00:40:40.368131  260378 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1217 00:40:40.368226  260378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:40:40.368249  260378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-742860 minikube.k8s.io/updated_at=2025_12_17T00_40_40_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=c7bb9b74fe8fa422b352c813eb039f077f405cb1 minikube.k8s.io/name=old-k8s-version-742860 minikube.k8s.io/primary=true
	I1217 00:40:40.377344  260378 ops.go:34] apiserver oom_adj: -16
	I1217 00:40:40.445431  260378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:40:40.946612  260378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:40:41.445849  260378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:40:41.946210  260378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:40:42.445674  260378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:40:42.946181  260378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:40:43.446097  260378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:40:41.359919  211439 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1217 00:40:41.360347  211439 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1217 00:40:41.360393  211439 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:40:41.360440  211439 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:40:41.394448  211439 cri.go:89] found id: "81770d70df736c55a0ce9c6a787d9a2c0ad3240f757d951f79d8bd010c312232"
	I1217 00:40:41.394469  211439 cri.go:89] found id: ""
	I1217 00:40:41.394476  211439 logs.go:282] 1 containers: [81770d70df736c55a0ce9c6a787d9a2c0ad3240f757d951f79d8bd010c312232]
	I1217 00:40:41.394525  211439 ssh_runner.go:195] Run: which crictl
	I1217 00:40:41.398494  211439 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:40:41.398544  211439 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:40:41.432764  211439 cri.go:89] found id: ""
	I1217 00:40:41.432786  211439 logs.go:282] 0 containers: []
	W1217 00:40:41.432796  211439 logs.go:284] No container was found matching "etcd"
	I1217 00:40:41.432802  211439 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:40:41.432863  211439 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:40:41.471259  211439 cri.go:89] found id: ""
	I1217 00:40:41.471282  211439 logs.go:282] 0 containers: []
	W1217 00:40:41.471294  211439 logs.go:284] No container was found matching "coredns"
	I1217 00:40:41.471302  211439 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:40:41.471354  211439 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:40:41.507285  211439 cri.go:89] found id: "4e190c0a9f421d12adcc9208d08ab91cb11f9e4325f737619a4434567dbccaeb"
	I1217 00:40:41.507311  211439 cri.go:89] found id: ""
	I1217 00:40:41.507322  211439 logs.go:282] 1 containers: [4e190c0a9f421d12adcc9208d08ab91cb11f9e4325f737619a4434567dbccaeb]
	I1217 00:40:41.507381  211439 ssh_runner.go:195] Run: which crictl
	I1217 00:40:41.511482  211439 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:40:41.511549  211439 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:40:41.546042  211439 cri.go:89] found id: ""
	I1217 00:40:41.546070  211439 logs.go:282] 0 containers: []
	W1217 00:40:41.546078  211439 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:40:41.546084  211439 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:40:41.546137  211439 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:40:41.578679  211439 cri.go:89] found id: "a17d3e4d9deea2292fd636997da3ccec3cbb8173240b195f2d7308083588d524"
	I1217 00:40:41.578700  211439 cri.go:89] found id: ""
	I1217 00:40:41.578711  211439 logs.go:282] 1 containers: [a17d3e4d9deea2292fd636997da3ccec3cbb8173240b195f2d7308083588d524]
	I1217 00:40:41.578758  211439 ssh_runner.go:195] Run: which crictl
	I1217 00:40:41.582384  211439 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:40:41.582436  211439 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:40:41.615251  211439 cri.go:89] found id: ""
	I1217 00:40:41.615280  211439 logs.go:282] 0 containers: []
	W1217 00:40:41.615292  211439 logs.go:284] No container was found matching "kindnet"
	I1217 00:40:41.615299  211439 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 00:40:41.615358  211439 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 00:40:41.649080  211439 cri.go:89] found id: ""
	I1217 00:40:41.649102  211439 logs.go:282] 0 containers: []
	W1217 00:40:41.649112  211439 logs.go:284] No container was found matching "storage-provisioner"
	I1217 00:40:41.649121  211439 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:40:41.649135  211439 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:40:41.706263  211439 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:40:41.706284  211439 logs.go:123] Gathering logs for kube-apiserver [81770d70df736c55a0ce9c6a787d9a2c0ad3240f757d951f79d8bd010c312232] ...
	I1217 00:40:41.706298  211439 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81770d70df736c55a0ce9c6a787d9a2c0ad3240f757d951f79d8bd010c312232"
	I1217 00:40:41.743133  211439 logs.go:123] Gathering logs for kube-scheduler [4e190c0a9f421d12adcc9208d08ab91cb11f9e4325f737619a4434567dbccaeb] ...
	I1217 00:40:41.743159  211439 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4e190c0a9f421d12adcc9208d08ab91cb11f9e4325f737619a4434567dbccaeb"
	I1217 00:40:41.817813  211439 logs.go:123] Gathering logs for kube-controller-manager [a17d3e4d9deea2292fd636997da3ccec3cbb8173240b195f2d7308083588d524] ...
	I1217 00:40:41.817843  211439 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a17d3e4d9deea2292fd636997da3ccec3cbb8173240b195f2d7308083588d524"
	I1217 00:40:41.854323  211439 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:40:41.854345  211439 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:40:41.903235  211439 logs.go:123] Gathering logs for container status ...
	I1217 00:40:41.903264  211439 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:40:41.941003  211439 logs.go:123] Gathering logs for kubelet ...
	I1217 00:40:41.941031  211439 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:40:42.062694  211439 logs.go:123] Gathering logs for dmesg ...
	I1217 00:40:42.062723  211439 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:40:44.579512  211439 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1217 00:40:44.579910  211439 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1217 00:40:44.579958  211439 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:40:44.580038  211439 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:40:44.614267  211439 cri.go:89] found id: "81770d70df736c55a0ce9c6a787d9a2c0ad3240f757d951f79d8bd010c312232"
	I1217 00:40:44.614287  211439 cri.go:89] found id: ""
	I1217 00:40:44.614296  211439 logs.go:282] 1 containers: [81770d70df736c55a0ce9c6a787d9a2c0ad3240f757d951f79d8bd010c312232]
	I1217 00:40:44.614353  211439 ssh_runner.go:195] Run: which crictl
	I1217 00:40:44.617974  211439 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:40:44.618061  211439 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:40:44.651015  211439 cri.go:89] found id: ""
	I1217 00:40:44.651040  211439 logs.go:282] 0 containers: []
	W1217 00:40:44.651048  211439 logs.go:284] No container was found matching "etcd"
	I1217 00:40:44.651054  211439 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:40:44.651109  211439 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:40:44.683910  211439 cri.go:89] found id: ""
	I1217 00:40:44.683938  211439 logs.go:282] 0 containers: []
	W1217 00:40:44.683948  211439 logs.go:284] No container was found matching "coredns"
	I1217 00:40:44.683957  211439 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:40:44.684033  211439 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:40:44.716288  211439 cri.go:89] found id: "4e190c0a9f421d12adcc9208d08ab91cb11f9e4325f737619a4434567dbccaeb"
	I1217 00:40:44.716305  211439 cri.go:89] found id: ""
	I1217 00:40:44.716314  211439 logs.go:282] 1 containers: [4e190c0a9f421d12adcc9208d08ab91cb11f9e4325f737619a4434567dbccaeb]
	I1217 00:40:44.716370  211439 ssh_runner.go:195] Run: which crictl
	I1217 00:40:44.720144  211439 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:40:44.720213  211439 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:40:44.753423  211439 cri.go:89] found id: ""
	I1217 00:40:44.753445  211439 logs.go:282] 0 containers: []
	W1217 00:40:44.753455  211439 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:40:44.753463  211439 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:40:44.753517  211439 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:40:44.787451  211439 cri.go:89] found id: "a17d3e4d9deea2292fd636997da3ccec3cbb8173240b195f2d7308083588d524"
	I1217 00:40:44.787475  211439 cri.go:89] found id: ""
	I1217 00:40:44.787485  211439 logs.go:282] 1 containers: [a17d3e4d9deea2292fd636997da3ccec3cbb8173240b195f2d7308083588d524]
	I1217 00:40:44.787542  211439 ssh_runner.go:195] Run: which crictl
	I1217 00:40:44.791430  211439 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:40:44.791497  211439 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:40:44.825839  211439 cri.go:89] found id: ""
	I1217 00:40:44.825859  211439 logs.go:282] 0 containers: []
	W1217 00:40:44.825875  211439 logs.go:284] No container was found matching "kindnet"
	I1217 00:40:44.825880  211439 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 00:40:44.825925  211439 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 00:40:44.859652  211439 cri.go:89] found id: ""
	I1217 00:40:44.859673  211439 logs.go:282] 0 containers: []
	W1217 00:40:44.859680  211439 logs.go:284] No container was found matching "storage-provisioner"
	I1217 00:40:44.859689  211439 logs.go:123] Gathering logs for kube-scheduler [4e190c0a9f421d12adcc9208d08ab91cb11f9e4325f737619a4434567dbccaeb] ...
	I1217 00:40:44.859700  211439 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4e190c0a9f421d12adcc9208d08ab91cb11f9e4325f737619a4434567dbccaeb"
	I1217 00:40:44.930775  211439 logs.go:123] Gathering logs for kube-controller-manager [a17d3e4d9deea2292fd636997da3ccec3cbb8173240b195f2d7308083588d524] ...
	I1217 00:40:44.930801  211439 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a17d3e4d9deea2292fd636997da3ccec3cbb8173240b195f2d7308083588d524"
	I1217 00:40:44.965443  211439 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:40:44.965466  211439 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:40:45.021028  211439 logs.go:123] Gathering logs for container status ...
	I1217 00:40:45.021054  211439 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:40:45.057647  211439 logs.go:123] Gathering logs for kubelet ...
	I1217 00:40:45.057671  211439 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:40:45.158271  211439 logs.go:123] Gathering logs for dmesg ...
	I1217 00:40:45.158297  211439 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:40:45.174428  211439 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:40:45.174451  211439 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1217 00:40:43.946112  260378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:40:44.445957  260378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:40:44.945731  260378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:40:45.445950  260378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:40:45.946194  260378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:40:46.446274  260378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:40:46.946410  260378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:40:47.446340  260378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:40:47.946160  260378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:40:48.446058  260378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1217 00:40:45.230148  211439 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:40:45.230176  211439 logs.go:123] Gathering logs for kube-apiserver [81770d70df736c55a0ce9c6a787d9a2c0ad3240f757d951f79d8bd010c312232] ...
	I1217 00:40:45.230187  211439 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81770d70df736c55a0ce9c6a787d9a2c0ad3240f757d951f79d8bd010c312232"
	I1217 00:40:47.767607  211439 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1217 00:40:47.768033  211439 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1217 00:40:47.768079  211439 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:40:47.768130  211439 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:40:47.802343  211439 cri.go:89] found id: "81770d70df736c55a0ce9c6a787d9a2c0ad3240f757d951f79d8bd010c312232"
	I1217 00:40:47.802364  211439 cri.go:89] found id: ""
	I1217 00:40:47.802371  211439 logs.go:282] 1 containers: [81770d70df736c55a0ce9c6a787d9a2c0ad3240f757d951f79d8bd010c312232]
	I1217 00:40:47.802425  211439 ssh_runner.go:195] Run: which crictl
	I1217 00:40:47.806612  211439 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:40:47.806669  211439 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:40:47.841174  211439 cri.go:89] found id: ""
	I1217 00:40:47.841201  211439 logs.go:282] 0 containers: []
	W1217 00:40:47.841212  211439 logs.go:284] No container was found matching "etcd"
	I1217 00:40:47.841220  211439 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:40:47.841276  211439 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:40:47.874121  211439 cri.go:89] found id: ""
	I1217 00:40:47.874141  211439 logs.go:282] 0 containers: []
	W1217 00:40:47.874149  211439 logs.go:284] No container was found matching "coredns"
	I1217 00:40:47.874154  211439 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:40:47.874199  211439 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:40:47.906786  211439 cri.go:89] found id: "4e190c0a9f421d12adcc9208d08ab91cb11f9e4325f737619a4434567dbccaeb"
	I1217 00:40:47.906809  211439 cri.go:89] found id: ""
	I1217 00:40:47.906819  211439 logs.go:282] 1 containers: [4e190c0a9f421d12adcc9208d08ab91cb11f9e4325f737619a4434567dbccaeb]
	I1217 00:40:47.906866  211439 ssh_runner.go:195] Run: which crictl
	I1217 00:40:47.910301  211439 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:40:47.910361  211439 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:40:47.943232  211439 cri.go:89] found id: ""
	I1217 00:40:47.943255  211439 logs.go:282] 0 containers: []
	W1217 00:40:47.943264  211439 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:40:47.943270  211439 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:40:47.943320  211439 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:40:47.979381  211439 cri.go:89] found id: "a17d3e4d9deea2292fd636997da3ccec3cbb8173240b195f2d7308083588d524"
	I1217 00:40:47.979401  211439 cri.go:89] found id: ""
	I1217 00:40:47.979412  211439 logs.go:282] 1 containers: [a17d3e4d9deea2292fd636997da3ccec3cbb8173240b195f2d7308083588d524]
	I1217 00:40:47.979473  211439 ssh_runner.go:195] Run: which crictl
	I1217 00:40:47.983461  211439 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:40:47.983527  211439 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:40:48.019083  211439 cri.go:89] found id: ""
	I1217 00:40:48.019104  211439 logs.go:282] 0 containers: []
	W1217 00:40:48.019112  211439 logs.go:284] No container was found matching "kindnet"
	I1217 00:40:48.019118  211439 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 00:40:48.019162  211439 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 00:40:48.053497  211439 cri.go:89] found id: ""
	I1217 00:40:48.053517  211439 logs.go:282] 0 containers: []
	W1217 00:40:48.053524  211439 logs.go:284] No container was found matching "storage-provisioner"
	I1217 00:40:48.053532  211439 logs.go:123] Gathering logs for kubelet ...
	I1217 00:40:48.053542  211439 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:40:48.154650  211439 logs.go:123] Gathering logs for dmesg ...
	I1217 00:40:48.154681  211439 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:40:48.171488  211439 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:40:48.171516  211439 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:40:48.229160  211439 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:40:48.229185  211439 logs.go:123] Gathering logs for kube-apiserver [81770d70df736c55a0ce9c6a787d9a2c0ad3240f757d951f79d8bd010c312232] ...
	I1217 00:40:48.229200  211439 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81770d70df736c55a0ce9c6a787d9a2c0ad3240f757d951f79d8bd010c312232"
	I1217 00:40:48.265905  211439 logs.go:123] Gathering logs for kube-scheduler [4e190c0a9f421d12adcc9208d08ab91cb11f9e4325f737619a4434567dbccaeb] ...
	I1217 00:40:48.265929  211439 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4e190c0a9f421d12adcc9208d08ab91cb11f9e4325f737619a4434567dbccaeb"
	I1217 00:40:48.339403  211439 logs.go:123] Gathering logs for kube-controller-manager [a17d3e4d9deea2292fd636997da3ccec3cbb8173240b195f2d7308083588d524] ...
	I1217 00:40:48.339435  211439 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a17d3e4d9deea2292fd636997da3ccec3cbb8173240b195f2d7308083588d524"
	I1217 00:40:48.374787  211439 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:40:48.374827  211439 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:40:48.425835  211439 logs.go:123] Gathering logs for container status ...
	I1217 00:40:48.425867  211439 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:40:48.880664  224114 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.060858429s)
	W1217 00:40:48.880699  224114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I1217 00:40:48.880709  224114 logs.go:123] Gathering logs for kube-scheduler [935bec7f90de32d67ca8b89cce2e3566052a6687fa0bbbcda96111a31ae6fc6a] ...
	I1217 00:40:48.880729  224114 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 935bec7f90de32d67ca8b89cce2e3566052a6687fa0bbbcda96111a31ae6fc6a"
	I1217 00:40:48.911218  224114 logs.go:123] Gathering logs for kube-controller-manager [dc75709a155a2a5401b46d5ddac2a265bc12181fadc33c0e4f267d116f4d6829] ...
	I1217 00:40:48.911248  224114 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dc75709a155a2a5401b46d5ddac2a265bc12181fadc33c0e4f267d116f4d6829"
	I1217 00:40:48.942663  224114 logs.go:123] Gathering logs for container status ...
	I1217 00:40:48.942697  224114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:40:48.980087  224114 logs.go:123] Gathering logs for kube-apiserver [a2ff81b20789f24abdadda8b00c85dbe176cb1f647a6ae57405ea118d299cc22] ...
	I1217 00:40:48.980124  224114 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a2ff81b20789f24abdadda8b00c85dbe176cb1f647a6ae57405ea118d299cc22"
	I1217 00:40:49.014206  224114 logs.go:123] Gathering logs for kube-apiserver [bbbd1a227c5cedb8d48bd4671e505f8c3b234a95c40a60f240637f17b7839b10] ...
	I1217 00:40:49.014238  224114 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bbbd1a227c5cedb8d48bd4671e505f8c3b234a95c40a60f240637f17b7839b10"
	I1217 00:40:49.045672  224114 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:40:49.045700  224114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:40:51.604386  224114 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1217 00:40:48.945887  260378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:40:49.445613  260378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:40:49.945870  260378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:40:50.446330  260378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:40:50.946466  260378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:40:51.446523  260378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:40:51.945891  260378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:40:52.445846  260378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:40:52.945643  260378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:40:53.022651  260378 kubeadm.go:1114] duration metric: took 12.654483727s to wait for elevateKubeSystemPrivileges
	I1217 00:40:53.022685  260378 kubeadm.go:403] duration metric: took 21.449958111s to StartCluster
	I1217 00:40:53.022704  260378 settings.go:142] acquiring lock: {Name:mk7d7632cd00ceda791845d793d841181ea8188a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:40:53.022776  260378 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22168-12816/kubeconfig
	I1217 00:40:53.024188  260378 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/kubeconfig: {Name:mkd977475b39383babd1c1bcd25b2b3c1ea2d727 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:40:53.024432  260378 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1217 00:40:53.024440  260378 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 00:40:53.024477  260378 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 00:40:53.024572  260378 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-742860"
	I1217 00:40:53.024589  260378 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-742860"
	I1217 00:40:53.024597  260378 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-742860"
	I1217 00:40:53.024627  260378 config.go:182] Loaded profile config "old-k8s-version-742860": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1217 00:40:53.024630  260378 host.go:66] Checking if "old-k8s-version-742860" exists ...
	I1217 00:40:53.024629  260378 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-742860"
	I1217 00:40:53.025129  260378 cli_runner.go:164] Run: docker container inspect old-k8s-version-742860 --format={{.State.Status}}
	I1217 00:40:53.025287  260378 cli_runner.go:164] Run: docker container inspect old-k8s-version-742860 --format={{.State.Status}}
	I1217 00:40:53.029185  260378 out.go:179] * Verifying Kubernetes components...
	I1217 00:40:53.030452  260378 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:40:53.049184  260378 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 00:40:53.050223  260378 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:40:53.050244  260378 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 00:40:53.050305  260378 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-742860
	I1217 00:40:53.050610  260378 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-742860"
	I1217 00:40:53.050654  260378 host.go:66] Checking if "old-k8s-version-742860" exists ...
	I1217 00:40:53.051127  260378 cli_runner.go:164] Run: docker container inspect old-k8s-version-742860 --format={{.State.Status}}
	I1217 00:40:53.077822  260378 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 00:40:53.077906  260378 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 00:40:53.078048  260378 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-742860
	I1217 00:40:53.086234  260378 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/old-k8s-version-742860/id_rsa Username:docker}
	I1217 00:40:53.105158  260378 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/old-k8s-version-742860/id_rsa Username:docker}
	I1217 00:40:53.129641  260378 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1217 00:40:53.172261  260378 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 00:40:53.204717  260378 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:40:53.227040  260378 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:40:53.427873  260378 start.go:977] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1217 00:40:53.429865  260378 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-742860" to be "Ready" ...
	I1217 00:40:53.658149  260378 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1217 00:40:53.659019  260378 addons.go:530] duration metric: took 634.541432ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1217 00:40:50.967609  211439 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1217 00:40:50.968074  211439 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1217 00:40:50.968125  211439 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:40:50.968169  211439 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:40:51.007792  211439 cri.go:89] found id: "81770d70df736c55a0ce9c6a787d9a2c0ad3240f757d951f79d8bd010c312232"
	I1217 00:40:51.007814  211439 cri.go:89] found id: ""
	I1217 00:40:51.007829  211439 logs.go:282] 1 containers: [81770d70df736c55a0ce9c6a787d9a2c0ad3240f757d951f79d8bd010c312232]
	I1217 00:40:51.007886  211439 ssh_runner.go:195] Run: which crictl
	I1217 00:40:51.012591  211439 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:40:51.012659  211439 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:40:51.046981  211439 cri.go:89] found id: ""
	I1217 00:40:51.047031  211439 logs.go:282] 0 containers: []
	W1217 00:40:51.047039  211439 logs.go:284] No container was found matching "etcd"
	I1217 00:40:51.047045  211439 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:40:51.047095  211439 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:40:51.080974  211439 cri.go:89] found id: ""
	I1217 00:40:51.081010  211439 logs.go:282] 0 containers: []
	W1217 00:40:51.081022  211439 logs.go:284] No container was found matching "coredns"
	I1217 00:40:51.081029  211439 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:40:51.081091  211439 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:40:51.114350  211439 cri.go:89] found id: "4e190c0a9f421d12adcc9208d08ab91cb11f9e4325f737619a4434567dbccaeb"
	I1217 00:40:51.114373  211439 cri.go:89] found id: ""
	I1217 00:40:51.114382  211439 logs.go:282] 1 containers: [4e190c0a9f421d12adcc9208d08ab91cb11f9e4325f737619a4434567dbccaeb]
	I1217 00:40:51.114436  211439 ssh_runner.go:195] Run: which crictl
	I1217 00:40:51.118185  211439 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:40:51.118250  211439 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:40:51.152061  211439 cri.go:89] found id: ""
	I1217 00:40:51.152087  211439 logs.go:282] 0 containers: []
	W1217 00:40:51.152098  211439 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:40:51.152105  211439 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:40:51.152161  211439 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:40:51.185408  211439 cri.go:89] found id: "a17d3e4d9deea2292fd636997da3ccec3cbb8173240b195f2d7308083588d524"
	I1217 00:40:51.185430  211439 cri.go:89] found id: ""
	I1217 00:40:51.185440  211439 logs.go:282] 1 containers: [a17d3e4d9deea2292fd636997da3ccec3cbb8173240b195f2d7308083588d524]
	I1217 00:40:51.185487  211439 ssh_runner.go:195] Run: which crictl
	I1217 00:40:51.189068  211439 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:40:51.189131  211439 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:40:51.222971  211439 cri.go:89] found id: ""
	I1217 00:40:51.223011  211439 logs.go:282] 0 containers: []
	W1217 00:40:51.223025  211439 logs.go:284] No container was found matching "kindnet"
	I1217 00:40:51.223032  211439 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 00:40:51.223088  211439 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 00:40:51.256925  211439 cri.go:89] found id: ""
	I1217 00:40:51.256951  211439 logs.go:282] 0 containers: []
	W1217 00:40:51.256966  211439 logs.go:284] No container was found matching "storage-provisioner"
	I1217 00:40:51.256976  211439 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:40:51.256987  211439 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:40:51.316284  211439 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:40:51.316311  211439 logs.go:123] Gathering logs for kube-apiserver [81770d70df736c55a0ce9c6a787d9a2c0ad3240f757d951f79d8bd010c312232] ...
	I1217 00:40:51.316328  211439 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81770d70df736c55a0ce9c6a787d9a2c0ad3240f757d951f79d8bd010c312232"
	I1217 00:40:51.354275  211439 logs.go:123] Gathering logs for kube-scheduler [4e190c0a9f421d12adcc9208d08ab91cb11f9e4325f737619a4434567dbccaeb] ...
	I1217 00:40:51.354299  211439 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4e190c0a9f421d12adcc9208d08ab91cb11f9e4325f737619a4434567dbccaeb"
	I1217 00:40:51.429149  211439 logs.go:123] Gathering logs for kube-controller-manager [a17d3e4d9deea2292fd636997da3ccec3cbb8173240b195f2d7308083588d524] ...
	I1217 00:40:51.429177  211439 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a17d3e4d9deea2292fd636997da3ccec3cbb8173240b195f2d7308083588d524"
	I1217 00:40:51.465090  211439 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:40:51.465119  211439 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:40:51.520639  211439 logs.go:123] Gathering logs for container status ...
	I1217 00:40:51.520677  211439 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:40:51.556921  211439 logs.go:123] Gathering logs for kubelet ...
	I1217 00:40:51.556947  211439 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:40:51.652745  211439 logs.go:123] Gathering logs for dmesg ...
	I1217 00:40:51.652779  211439 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:40:54.169848  211439 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1217 00:40:54.170230  211439 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1217 00:40:54.170291  211439 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:40:54.170341  211439 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:40:54.205621  211439 cri.go:89] found id: "81770d70df736c55a0ce9c6a787d9a2c0ad3240f757d951f79d8bd010c312232"
	I1217 00:40:54.205645  211439 cri.go:89] found id: ""
	I1217 00:40:54.205654  211439 logs.go:282] 1 containers: [81770d70df736c55a0ce9c6a787d9a2c0ad3240f757d951f79d8bd010c312232]
	I1217 00:40:54.205714  211439 ssh_runner.go:195] Run: which crictl
	I1217 00:40:54.209653  211439 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:40:54.209718  211439 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:40:54.244396  211439 cri.go:89] found id: ""
	I1217 00:40:54.244418  211439 logs.go:282] 0 containers: []
	W1217 00:40:54.244428  211439 logs.go:284] No container was found matching "etcd"
	I1217 00:40:54.244435  211439 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:40:54.244489  211439 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:40:54.281240  211439 cri.go:89] found id: ""
	I1217 00:40:54.281268  211439 logs.go:282] 0 containers: []
	W1217 00:40:54.281280  211439 logs.go:284] No container was found matching "coredns"
	I1217 00:40:54.281287  211439 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:40:54.281338  211439 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:40:54.321707  211439 cri.go:89] found id: "4e190c0a9f421d12adcc9208d08ab91cb11f9e4325f737619a4434567dbccaeb"
	I1217 00:40:54.321731  211439 cri.go:89] found id: ""
	I1217 00:40:54.321740  211439 logs.go:282] 1 containers: [4e190c0a9f421d12adcc9208d08ab91cb11f9e4325f737619a4434567dbccaeb]
	I1217 00:40:54.321800  211439 ssh_runner.go:195] Run: which crictl
	I1217 00:40:54.326632  211439 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:40:54.326694  211439 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:40:54.368025  211439 cri.go:89] found id: ""
	I1217 00:40:54.368052  211439 logs.go:282] 0 containers: []
	W1217 00:40:54.368062  211439 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:40:54.368068  211439 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:40:54.368127  211439 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:40:54.415333  211439 cri.go:89] found id: "a17d3e4d9deea2292fd636997da3ccec3cbb8173240b195f2d7308083588d524"
	I1217 00:40:54.415356  211439 cri.go:89] found id: ""
	I1217 00:40:54.415365  211439 logs.go:282] 1 containers: [a17d3e4d9deea2292fd636997da3ccec3cbb8173240b195f2d7308083588d524]
	I1217 00:40:54.415421  211439 ssh_runner.go:195] Run: which crictl
	I1217 00:40:54.420189  211439 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:40:54.420253  211439 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:40:54.465686  211439 cri.go:89] found id: ""
	I1217 00:40:54.466021  211439 logs.go:282] 0 containers: []
	W1217 00:40:54.466037  211439 logs.go:284] No container was found matching "kindnet"
	I1217 00:40:54.466046  211439 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 00:40:54.466105  211439 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 00:40:54.513494  211439 cri.go:89] found id: ""
	I1217 00:40:54.513515  211439 logs.go:282] 0 containers: []
	W1217 00:40:54.513523  211439 logs.go:284] No container was found matching "storage-provisioner"
	I1217 00:40:54.513531  211439 logs.go:123] Gathering logs for kubelet ...
	I1217 00:40:54.513543  211439 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:40:54.666104  211439 logs.go:123] Gathering logs for dmesg ...
	I1217 00:40:54.666143  211439 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:40:54.686114  211439 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:40:54.686143  211439 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:40:54.767516  211439 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:40:54.767544  211439 logs.go:123] Gathering logs for kube-apiserver [81770d70df736c55a0ce9c6a787d9a2c0ad3240f757d951f79d8bd010c312232] ...
	I1217 00:40:54.767563  211439 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81770d70df736c55a0ce9c6a787d9a2c0ad3240f757d951f79d8bd010c312232"
	I1217 00:40:54.820098  211439 logs.go:123] Gathering logs for kube-scheduler [4e190c0a9f421d12adcc9208d08ab91cb11f9e4325f737619a4434567dbccaeb] ...
	I1217 00:40:54.820125  211439 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4e190c0a9f421d12adcc9208d08ab91cb11f9e4325f737619a4434567dbccaeb"
	I1217 00:40:54.917019  211439 logs.go:123] Gathering logs for kube-controller-manager [a17d3e4d9deea2292fd636997da3ccec3cbb8173240b195f2d7308083588d524] ...
	I1217 00:40:54.917048  211439 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a17d3e4d9deea2292fd636997da3ccec3cbb8173240b195f2d7308083588d524"
	I1217 00:40:54.955225  211439 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:40:54.955252  211439 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:40:55.025177  211439 logs.go:123] Gathering logs for container status ...
	I1217 00:40:55.025212  211439 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:40:52.984170  224114 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": read tcp 192.168.76.1:54292->192.168.76.2:8443: read: connection reset by peer
	I1217 00:40:52.984236  224114 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:40:52.984296  224114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:40:53.013821  224114 cri.go:89] found id: "a2ff81b20789f24abdadda8b00c85dbe176cb1f647a6ae57405ea118d299cc22"
	I1217 00:40:53.013845  224114 cri.go:89] found id: "bbbd1a227c5cedb8d48bd4671e505f8c3b234a95c40a60f240637f17b7839b10"
	I1217 00:40:53.013851  224114 cri.go:89] found id: ""
	I1217 00:40:53.013860  224114 logs.go:282] 2 containers: [a2ff81b20789f24abdadda8b00c85dbe176cb1f647a6ae57405ea118d299cc22 bbbd1a227c5cedb8d48bd4671e505f8c3b234a95c40a60f240637f17b7839b10]
	I1217 00:40:53.013923  224114 ssh_runner.go:195] Run: which crictl
	I1217 00:40:53.017893  224114 ssh_runner.go:195] Run: which crictl
	I1217 00:40:53.022086  224114 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:40:53.022142  224114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:40:53.056233  224114 cri.go:89] found id: ""
	I1217 00:40:53.056257  224114 logs.go:282] 0 containers: []
	W1217 00:40:53.056267  224114 logs.go:284] No container was found matching "etcd"
	I1217 00:40:53.056275  224114 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:40:53.056331  224114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:40:53.105505  224114 cri.go:89] found id: ""
	I1217 00:40:53.105528  224114 logs.go:282] 0 containers: []
	W1217 00:40:53.105538  224114 logs.go:284] No container was found matching "coredns"
	I1217 00:40:53.105545  224114 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:40:53.105596  224114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:40:53.138852  224114 cri.go:89] found id: "935bec7f90de32d67ca8b89cce2e3566052a6687fa0bbbcda96111a31ae6fc6a"
	I1217 00:40:53.138877  224114 cri.go:89] found id: ""
	I1217 00:40:53.138887  224114 logs.go:282] 1 containers: [935bec7f90de32d67ca8b89cce2e3566052a6687fa0bbbcda96111a31ae6fc6a]
	I1217 00:40:53.138942  224114 ssh_runner.go:195] Run: which crictl
	I1217 00:40:53.144315  224114 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:40:53.144418  224114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:40:53.178055  224114 cri.go:89] found id: ""
	I1217 00:40:53.178078  224114 logs.go:282] 0 containers: []
	W1217 00:40:53.178090  224114 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:40:53.178097  224114 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:40:53.178158  224114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:40:53.220923  224114 cri.go:89] found id: "dc75709a155a2a5401b46d5ddac2a265bc12181fadc33c0e4f267d116f4d6829"
	I1217 00:40:53.220959  224114 cri.go:89] found id: ""
	I1217 00:40:53.220970  224114 logs.go:282] 1 containers: [dc75709a155a2a5401b46d5ddac2a265bc12181fadc33c0e4f267d116f4d6829]
	I1217 00:40:53.221049  224114 ssh_runner.go:195] Run: which crictl
	I1217 00:40:53.226376  224114 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:40:53.226448  224114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:40:53.260601  224114 cri.go:89] found id: ""
	I1217 00:40:53.260640  224114 logs.go:282] 0 containers: []
	W1217 00:40:53.260653  224114 logs.go:284] No container was found matching "kindnet"
	I1217 00:40:53.260660  224114 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 00:40:53.260749  224114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 00:40:53.298048  224114 cri.go:89] found id: ""
	I1217 00:40:53.298078  224114 logs.go:282] 0 containers: []
	W1217 00:40:53.298089  224114 logs.go:284] No container was found matching "storage-provisioner"
	I1217 00:40:53.298105  224114 logs.go:123] Gathering logs for dmesg ...
	I1217 00:40:53.298140  224114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:40:53.319491  224114 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:40:53.319524  224114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:40:53.401803  224114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:40:53.401827  224114 logs.go:123] Gathering logs for kube-apiserver [bbbd1a227c5cedb8d48bd4671e505f8c3b234a95c40a60f240637f17b7839b10] ...
	I1217 00:40:53.401842  224114 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bbbd1a227c5cedb8d48bd4671e505f8c3b234a95c40a60f240637f17b7839b10"
	W1217 00:40:53.442336  224114 logs.go:130] failed kube-apiserver [bbbd1a227c5cedb8d48bd4671e505f8c3b234a95c40a60f240637f17b7839b10]: command: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bbbd1a227c5cedb8d48bd4671e505f8c3b234a95c40a60f240637f17b7839b10" /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bbbd1a227c5cedb8d48bd4671e505f8c3b234a95c40a60f240637f17b7839b10": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:40:53.439322    5513 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bbbd1a227c5cedb8d48bd4671e505f8c3b234a95c40a60f240637f17b7839b10\": container with ID starting with bbbd1a227c5cedb8d48bd4671e505f8c3b234a95c40a60f240637f17b7839b10 not found: ID does not exist" containerID="bbbd1a227c5cedb8d48bd4671e505f8c3b234a95c40a60f240637f17b7839b10"
	time="2025-12-17T00:40:53Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"bbbd1a227c5cedb8d48bd4671e505f8c3b234a95c40a60f240637f17b7839b10\": container with ID starting with bbbd1a227c5cedb8d48bd4671e505f8c3b234a95c40a60f240637f17b7839b10 not found: ID does not exist"
	 output: 
	** stderr ** 
	E1217 00:40:53.439322    5513 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bbbd1a227c5cedb8d48bd4671e505f8c3b234a95c40a60f240637f17b7839b10\": container with ID starting with bbbd1a227c5cedb8d48bd4671e505f8c3b234a95c40a60f240637f17b7839b10 not found: ID does not exist" containerID="bbbd1a227c5cedb8d48bd4671e505f8c3b234a95c40a60f240637f17b7839b10"
	time="2025-12-17T00:40:53Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"bbbd1a227c5cedb8d48bd4671e505f8c3b234a95c40a60f240637f17b7839b10\": container with ID starting with bbbd1a227c5cedb8d48bd4671e505f8c3b234a95c40a60f240637f17b7839b10 not found: ID does not exist"
	
	** /stderr **
	I1217 00:40:53.442359  224114 logs.go:123] Gathering logs for kube-scheduler [935bec7f90de32d67ca8b89cce2e3566052a6687fa0bbbcda96111a31ae6fc6a] ...
	I1217 00:40:53.442374  224114 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 935bec7f90de32d67ca8b89cce2e3566052a6687fa0bbbcda96111a31ae6fc6a"
	I1217 00:40:53.479840  224114 logs.go:123] Gathering logs for kube-controller-manager [dc75709a155a2a5401b46d5ddac2a265bc12181fadc33c0e4f267d116f4d6829] ...
	I1217 00:40:53.479876  224114 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dc75709a155a2a5401b46d5ddac2a265bc12181fadc33c0e4f267d116f4d6829"
	I1217 00:40:53.521481  224114 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:40:53.521510  224114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:40:53.605394  224114 logs.go:123] Gathering logs for kubelet ...
	I1217 00:40:53.605430  224114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:40:53.705255  224114 logs.go:123] Gathering logs for kube-apiserver [a2ff81b20789f24abdadda8b00c85dbe176cb1f647a6ae57405ea118d299cc22] ...
	I1217 00:40:53.705283  224114 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a2ff81b20789f24abdadda8b00c85dbe176cb1f647a6ae57405ea118d299cc22"
	I1217 00:40:53.736461  224114 logs.go:123] Gathering logs for container status ...
	I1217 00:40:53.736489  224114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:40:56.269701  224114 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1217 00:40:56.270128  224114 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 00:40:56.270184  224114 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:40:56.270241  224114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:40:56.297086  224114 cri.go:89] found id: "a2ff81b20789f24abdadda8b00c85dbe176cb1f647a6ae57405ea118d299cc22"
	I1217 00:40:56.297110  224114 cri.go:89] found id: ""
	I1217 00:40:56.297129  224114 logs.go:282] 1 containers: [a2ff81b20789f24abdadda8b00c85dbe176cb1f647a6ae57405ea118d299cc22]
	I1217 00:40:56.297185  224114 ssh_runner.go:195] Run: which crictl
	I1217 00:40:56.301220  224114 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:40:56.301281  224114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:40:56.327025  224114 cri.go:89] found id: ""
	I1217 00:40:56.327050  224114 logs.go:282] 0 containers: []
	W1217 00:40:56.327061  224114 logs.go:284] No container was found matching "etcd"
	I1217 00:40:56.327067  224114 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:40:56.327129  224114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:40:56.352793  224114 cri.go:89] found id: ""
	I1217 00:40:56.352821  224114 logs.go:282] 0 containers: []
	W1217 00:40:56.352833  224114 logs.go:284] No container was found matching "coredns"
	I1217 00:40:56.352840  224114 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:40:56.352888  224114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:40:56.379208  224114 cri.go:89] found id: "935bec7f90de32d67ca8b89cce2e3566052a6687fa0bbbcda96111a31ae6fc6a"
	I1217 00:40:56.379232  224114 cri.go:89] found id: ""
	I1217 00:40:56.379241  224114 logs.go:282] 1 containers: [935bec7f90de32d67ca8b89cce2e3566052a6687fa0bbbcda96111a31ae6fc6a]
	I1217 00:40:56.379314  224114 ssh_runner.go:195] Run: which crictl
	I1217 00:40:56.383443  224114 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:40:56.383504  224114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:40:56.408675  224114 cri.go:89] found id: ""
	I1217 00:40:56.408699  224114 logs.go:282] 0 containers: []
	W1217 00:40:56.408707  224114 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:40:56.408712  224114 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:40:56.408760  224114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:40:56.434659  224114 cri.go:89] found id: "dc75709a155a2a5401b46d5ddac2a265bc12181fadc33c0e4f267d116f4d6829"
	I1217 00:40:56.434680  224114 cri.go:89] found id: ""
	I1217 00:40:56.434689  224114 logs.go:282] 1 containers: [dc75709a155a2a5401b46d5ddac2a265bc12181fadc33c0e4f267d116f4d6829]
	I1217 00:40:56.434749  224114 ssh_runner.go:195] Run: which crictl
	I1217 00:40:56.438500  224114 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:40:56.438566  224114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:40:56.464253  224114 cri.go:89] found id: ""
	I1217 00:40:56.464275  224114 logs.go:282] 0 containers: []
	W1217 00:40:56.464285  224114 logs.go:284] No container was found matching "kindnet"
	I1217 00:40:56.464292  224114 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 00:40:56.464349  224114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 00:40:56.489480  224114 cri.go:89] found id: ""
	I1217 00:40:56.489507  224114 logs.go:282] 0 containers: []
	W1217 00:40:56.489515  224114 logs.go:284] No container was found matching "storage-provisioner"
	I1217 00:40:56.489523  224114 logs.go:123] Gathering logs for kube-scheduler [935bec7f90de32d67ca8b89cce2e3566052a6687fa0bbbcda96111a31ae6fc6a] ...
	I1217 00:40:56.489537  224114 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 935bec7f90de32d67ca8b89cce2e3566052a6687fa0bbbcda96111a31ae6fc6a"
	I1217 00:40:56.515404  224114 logs.go:123] Gathering logs for kube-controller-manager [dc75709a155a2a5401b46d5ddac2a265bc12181fadc33c0e4f267d116f4d6829] ...
	I1217 00:40:56.515429  224114 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dc75709a155a2a5401b46d5ddac2a265bc12181fadc33c0e4f267d116f4d6829"
	I1217 00:40:56.541112  224114 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:40:56.541137  224114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:40:56.595693  224114 logs.go:123] Gathering logs for container status ...
	I1217 00:40:56.595718  224114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:40:56.624483  224114 logs.go:123] Gathering logs for kubelet ...
	I1217 00:40:56.624507  224114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:40:56.710702  224114 logs.go:123] Gathering logs for dmesg ...
	I1217 00:40:56.710728  224114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:40:56.725360  224114 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:40:56.725384  224114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:40:56.779389  224114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:40:56.779411  224114 logs.go:123] Gathering logs for kube-apiserver [a2ff81b20789f24abdadda8b00c85dbe176cb1f647a6ae57405ea118d299cc22] ...
	I1217 00:40:56.779423  224114 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a2ff81b20789f24abdadda8b00c85dbe176cb1f647a6ae57405ea118d299cc22"
	I1217 00:40:53.932706  260378 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-742860" context rescaled to 1 replicas
	W1217 00:40:55.434496  260378 node_ready.go:57] node "old-k8s-version-742860" has "Ready":"False" status (will retry)
	W1217 00:40:57.932752  260378 node_ready.go:57] node "old-k8s-version-742860" has "Ready":"False" status (will retry)
	I1217 00:40:57.574069  211439 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1217 00:40:57.574404  211439 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1217 00:40:57.574445  211439 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:40:57.574488  211439 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:40:57.609557  211439 cri.go:89] found id: "81770d70df736c55a0ce9c6a787d9a2c0ad3240f757d951f79d8bd010c312232"
	I1217 00:40:57.609580  211439 cri.go:89] found id: ""
	I1217 00:40:57.609590  211439 logs.go:282] 1 containers: [81770d70df736c55a0ce9c6a787d9a2c0ad3240f757d951f79d8bd010c312232]
	I1217 00:40:57.609647  211439 ssh_runner.go:195] Run: which crictl
	I1217 00:40:57.613442  211439 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:40:57.613493  211439 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:40:57.647192  211439 cri.go:89] found id: ""
	I1217 00:40:57.647215  211439 logs.go:282] 0 containers: []
	W1217 00:40:57.647223  211439 logs.go:284] No container was found matching "etcd"
	I1217 00:40:57.647229  211439 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:40:57.647281  211439 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:40:57.681086  211439 cri.go:89] found id: ""
	I1217 00:40:57.681113  211439 logs.go:282] 0 containers: []
	W1217 00:40:57.681123  211439 logs.go:284] No container was found matching "coredns"
	I1217 00:40:57.681130  211439 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:40:57.681190  211439 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:40:57.714686  211439 cri.go:89] found id: "4e190c0a9f421d12adcc9208d08ab91cb11f9e4325f737619a4434567dbccaeb"
	I1217 00:40:57.714712  211439 cri.go:89] found id: ""
	I1217 00:40:57.714720  211439 logs.go:282] 1 containers: [4e190c0a9f421d12adcc9208d08ab91cb11f9e4325f737619a4434567dbccaeb]
	I1217 00:40:57.714773  211439 ssh_runner.go:195] Run: which crictl
	I1217 00:40:57.718393  211439 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:40:57.718442  211439 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:40:57.750604  211439 cri.go:89] found id: ""
	I1217 00:40:57.750624  211439 logs.go:282] 0 containers: []
	W1217 00:40:57.750632  211439 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:40:57.750637  211439 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:40:57.750681  211439 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:40:57.784453  211439 cri.go:89] found id: "a17d3e4d9deea2292fd636997da3ccec3cbb8173240b195f2d7308083588d524"
	I1217 00:40:57.784475  211439 cri.go:89] found id: ""
	I1217 00:40:57.784487  211439 logs.go:282] 1 containers: [a17d3e4d9deea2292fd636997da3ccec3cbb8173240b195f2d7308083588d524]
	I1217 00:40:57.784545  211439 ssh_runner.go:195] Run: which crictl
	I1217 00:40:57.788202  211439 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:40:57.788251  211439 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:40:57.822467  211439 cri.go:89] found id: ""
	I1217 00:40:57.822488  211439 logs.go:282] 0 containers: []
	W1217 00:40:57.822496  211439 logs.go:284] No container was found matching "kindnet"
	I1217 00:40:57.822502  211439 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 00:40:57.822565  211439 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 00:40:57.856270  211439 cri.go:89] found id: ""
	I1217 00:40:57.856296  211439 logs.go:282] 0 containers: []
	W1217 00:40:57.856305  211439 logs.go:284] No container was found matching "storage-provisioner"
	I1217 00:40:57.856317  211439 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:40:57.856334  211439 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:40:57.904960  211439 logs.go:123] Gathering logs for container status ...
	I1217 00:40:57.904989  211439 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:40:57.942251  211439 logs.go:123] Gathering logs for kubelet ...
	I1217 00:40:57.942277  211439 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:40:58.035133  211439 logs.go:123] Gathering logs for dmesg ...
	I1217 00:40:58.035167  211439 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:40:58.051184  211439 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:40:58.051207  211439 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:40:58.107160  211439 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:40:58.107180  211439 logs.go:123] Gathering logs for kube-apiserver [81770d70df736c55a0ce9c6a787d9a2c0ad3240f757d951f79d8bd010c312232] ...
	I1217 00:40:58.107196  211439 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81770d70df736c55a0ce9c6a787d9a2c0ad3240f757d951f79d8bd010c312232"
	I1217 00:40:58.145514  211439 logs.go:123] Gathering logs for kube-scheduler [4e190c0a9f421d12adcc9208d08ab91cb11f9e4325f737619a4434567dbccaeb] ...
	I1217 00:40:58.145544  211439 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4e190c0a9f421d12adcc9208d08ab91cb11f9e4325f737619a4434567dbccaeb"
	I1217 00:40:58.222553  211439 logs.go:123] Gathering logs for kube-controller-manager [a17d3e4d9deea2292fd636997da3ccec3cbb8173240b195f2d7308083588d524] ...
	I1217 00:40:58.222586  211439 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a17d3e4d9deea2292fd636997da3ccec3cbb8173240b195f2d7308083588d524"
	I1217 00:40:59.312313  224114 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1217 00:40:59.312710  224114 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 00:40:59.312760  224114 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:40:59.312865  224114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:40:59.339499  224114 cri.go:89] found id: "a2ff81b20789f24abdadda8b00c85dbe176cb1f647a6ae57405ea118d299cc22"
	I1217 00:40:59.339517  224114 cri.go:89] found id: ""
	I1217 00:40:59.339524  224114 logs.go:282] 1 containers: [a2ff81b20789f24abdadda8b00c85dbe176cb1f647a6ae57405ea118d299cc22]
	I1217 00:40:59.339570  224114 ssh_runner.go:195] Run: which crictl
	I1217 00:40:59.343363  224114 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:40:59.343413  224114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:40:59.368910  224114 cri.go:89] found id: ""
	I1217 00:40:59.368934  224114 logs.go:282] 0 containers: []
	W1217 00:40:59.368942  224114 logs.go:284] No container was found matching "etcd"
	I1217 00:40:59.368953  224114 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:40:59.369026  224114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:40:59.395047  224114 cri.go:89] found id: ""
	I1217 00:40:59.395070  224114 logs.go:282] 0 containers: []
	W1217 00:40:59.395077  224114 logs.go:284] No container was found matching "coredns"
	I1217 00:40:59.395083  224114 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:40:59.395132  224114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:40:59.420719  224114 cri.go:89] found id: "935bec7f90de32d67ca8b89cce2e3566052a6687fa0bbbcda96111a31ae6fc6a"
	I1217 00:40:59.420744  224114 cri.go:89] found id: ""
	I1217 00:40:59.420753  224114 logs.go:282] 1 containers: [935bec7f90de32d67ca8b89cce2e3566052a6687fa0bbbcda96111a31ae6fc6a]
	I1217 00:40:59.420810  224114 ssh_runner.go:195] Run: which crictl
	I1217 00:40:59.424519  224114 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:40:59.424570  224114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:40:59.449177  224114 cri.go:89] found id: ""
	I1217 00:40:59.449200  224114 logs.go:282] 0 containers: []
	W1217 00:40:59.449210  224114 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:40:59.449217  224114 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:40:59.449272  224114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:40:59.474604  224114 cri.go:89] found id: "dc75709a155a2a5401b46d5ddac2a265bc12181fadc33c0e4f267d116f4d6829"
	I1217 00:40:59.474622  224114 cri.go:89] found id: ""
	I1217 00:40:59.474629  224114 logs.go:282] 1 containers: [dc75709a155a2a5401b46d5ddac2a265bc12181fadc33c0e4f267d116f4d6829]
	I1217 00:40:59.474669  224114 ssh_runner.go:195] Run: which crictl
	I1217 00:40:59.478601  224114 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:40:59.478654  224114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:40:59.503883  224114 cri.go:89] found id: ""
	I1217 00:40:59.503906  224114 logs.go:282] 0 containers: []
	W1217 00:40:59.503917  224114 logs.go:284] No container was found matching "kindnet"
	I1217 00:40:59.503922  224114 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 00:40:59.503980  224114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 00:40:59.528458  224114 cri.go:89] found id: ""
	I1217 00:40:59.528478  224114 logs.go:282] 0 containers: []
	W1217 00:40:59.528485  224114 logs.go:284] No container was found matching "storage-provisioner"
	I1217 00:40:59.528493  224114 logs.go:123] Gathering logs for kube-apiserver [a2ff81b20789f24abdadda8b00c85dbe176cb1f647a6ae57405ea118d299cc22] ...
	I1217 00:40:59.528503  224114 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a2ff81b20789f24abdadda8b00c85dbe176cb1f647a6ae57405ea118d299cc22"
	I1217 00:40:59.558644  224114 logs.go:123] Gathering logs for kube-scheduler [935bec7f90de32d67ca8b89cce2e3566052a6687fa0bbbcda96111a31ae6fc6a] ...
	I1217 00:40:59.558667  224114 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 935bec7f90de32d67ca8b89cce2e3566052a6687fa0bbbcda96111a31ae6fc6a"
	I1217 00:40:59.584276  224114 logs.go:123] Gathering logs for kube-controller-manager [dc75709a155a2a5401b46d5ddac2a265bc12181fadc33c0e4f267d116f4d6829] ...
	I1217 00:40:59.584302  224114 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dc75709a155a2a5401b46d5ddac2a265bc12181fadc33c0e4f267d116f4d6829"
	I1217 00:40:59.612969  224114 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:40:59.613024  224114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:40:59.667651  224114 logs.go:123] Gathering logs for container status ...
	I1217 00:40:59.667712  224114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:40:59.697093  224114 logs.go:123] Gathering logs for kubelet ...
	I1217 00:40:59.697123  224114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:40:59.783530  224114 logs.go:123] Gathering logs for dmesg ...
	I1217 00:40:59.783561  224114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:40:59.797739  224114 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:40:59.797766  224114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:40:59.852370  224114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1217 00:41:00.433037  260378 node_ready.go:57] node "old-k8s-version-742860" has "Ready":"False" status (will retry)
	W1217 00:41:02.433532  260378 node_ready.go:57] node "old-k8s-version-742860" has "Ready":"False" status (will retry)
	I1217 00:41:00.759382  211439 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1217 00:41:00.759856  211439 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1217 00:41:00.759915  211439 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:41:00.759972  211439 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:41:00.795947  211439 cri.go:89] found id: "81770d70df736c55a0ce9c6a787d9a2c0ad3240f757d951f79d8bd010c312232"
	I1217 00:41:00.795970  211439 cri.go:89] found id: ""
	I1217 00:41:00.795980  211439 logs.go:282] 1 containers: [81770d70df736c55a0ce9c6a787d9a2c0ad3240f757d951f79d8bd010c312232]
	I1217 00:41:00.796057  211439 ssh_runner.go:195] Run: which crictl
	I1217 00:41:00.800210  211439 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:41:00.800276  211439 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:41:00.834851  211439 cri.go:89] found id: ""
	I1217 00:41:00.834877  211439 logs.go:282] 0 containers: []
	W1217 00:41:00.834887  211439 logs.go:284] No container was found matching "etcd"
	I1217 00:41:00.834893  211439 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:41:00.834955  211439 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:41:00.869763  211439 cri.go:89] found id: ""
	I1217 00:41:00.869793  211439 logs.go:282] 0 containers: []
	W1217 00:41:00.869804  211439 logs.go:284] No container was found matching "coredns"
	I1217 00:41:00.869811  211439 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:41:00.869856  211439 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:41:00.903314  211439 cri.go:89] found id: "4e190c0a9f421d12adcc9208d08ab91cb11f9e4325f737619a4434567dbccaeb"
	I1217 00:41:00.903337  211439 cri.go:89] found id: ""
	I1217 00:41:00.903347  211439 logs.go:282] 1 containers: [4e190c0a9f421d12adcc9208d08ab91cb11f9e4325f737619a4434567dbccaeb]
	I1217 00:41:00.903406  211439 ssh_runner.go:195] Run: which crictl
	I1217 00:41:00.907116  211439 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:41:00.907173  211439 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:41:00.942409  211439 cri.go:89] found id: ""
	I1217 00:41:00.942434  211439 logs.go:282] 0 containers: []
	W1217 00:41:00.942442  211439 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:41:00.942447  211439 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:41:00.942489  211439 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:41:00.976840  211439 cri.go:89] found id: "a17d3e4d9deea2292fd636997da3ccec3cbb8173240b195f2d7308083588d524"
	I1217 00:41:00.976863  211439 cri.go:89] found id: ""
	I1217 00:41:00.976870  211439 logs.go:282] 1 containers: [a17d3e4d9deea2292fd636997da3ccec3cbb8173240b195f2d7308083588d524]
	I1217 00:41:00.976915  211439 ssh_runner.go:195] Run: which crictl
	I1217 00:41:00.980825  211439 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:41:00.980885  211439 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:41:01.016362  211439 cri.go:89] found id: ""
	I1217 00:41:01.016390  211439 logs.go:282] 0 containers: []
	W1217 00:41:01.016402  211439 logs.go:284] No container was found matching "kindnet"
	I1217 00:41:01.016413  211439 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 00:41:01.016480  211439 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 00:41:01.050293  211439 cri.go:89] found id: ""
	I1217 00:41:01.050317  211439 logs.go:282] 0 containers: []
	W1217 00:41:01.050327  211439 logs.go:284] No container was found matching "storage-provisioner"
	I1217 00:41:01.050339  211439 logs.go:123] Gathering logs for dmesg ...
	I1217 00:41:01.050352  211439 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:41:01.064985  211439 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:41:01.065032  211439 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:41:01.120888  211439 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:41:01.120911  211439 logs.go:123] Gathering logs for kube-apiserver [81770d70df736c55a0ce9c6a787d9a2c0ad3240f757d951f79d8bd010c312232] ...
	I1217 00:41:01.120923  211439 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81770d70df736c55a0ce9c6a787d9a2c0ad3240f757d951f79d8bd010c312232"
	I1217 00:41:01.156695  211439 logs.go:123] Gathering logs for kube-scheduler [4e190c0a9f421d12adcc9208d08ab91cb11f9e4325f737619a4434567dbccaeb] ...
	I1217 00:41:01.156720  211439 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4e190c0a9f421d12adcc9208d08ab91cb11f9e4325f737619a4434567dbccaeb"
	I1217 00:41:01.234694  211439 logs.go:123] Gathering logs for kube-controller-manager [a17d3e4d9deea2292fd636997da3ccec3cbb8173240b195f2d7308083588d524] ...
	I1217 00:41:01.234724  211439 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a17d3e4d9deea2292fd636997da3ccec3cbb8173240b195f2d7308083588d524"
	I1217 00:41:01.270458  211439 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:41:01.270487  211439 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:41:01.320041  211439 logs.go:123] Gathering logs for container status ...
	I1217 00:41:01.320071  211439 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:41:01.356751  211439 logs.go:123] Gathering logs for kubelet ...
	I1217 00:41:01.356774  211439 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:41:03.952636  211439 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1217 00:41:03.953027  211439 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1217 00:41:03.953086  211439 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:41:03.953134  211439 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:41:03.989166  211439 cri.go:89] found id: "81770d70df736c55a0ce9c6a787d9a2c0ad3240f757d951f79d8bd010c312232"
	I1217 00:41:03.989190  211439 cri.go:89] found id: ""
	I1217 00:41:03.989199  211439 logs.go:282] 1 containers: [81770d70df736c55a0ce9c6a787d9a2c0ad3240f757d951f79d8bd010c312232]
	I1217 00:41:03.989258  211439 ssh_runner.go:195] Run: which crictl
	I1217 00:41:03.992963  211439 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:41:03.993034  211439 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:41:04.027465  211439 cri.go:89] found id: ""
	I1217 00:41:04.027484  211439 logs.go:282] 0 containers: []
	W1217 00:41:04.027492  211439 logs.go:284] No container was found matching "etcd"
	I1217 00:41:04.027498  211439 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:41:04.027539  211439 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:41:04.061844  211439 cri.go:89] found id: ""
	I1217 00:41:04.061867  211439 logs.go:282] 0 containers: []
	W1217 00:41:04.061875  211439 logs.go:284] No container was found matching "coredns"
	I1217 00:41:04.061881  211439 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:41:04.061938  211439 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:41:04.096405  211439 cri.go:89] found id: "4e190c0a9f421d12adcc9208d08ab91cb11f9e4325f737619a4434567dbccaeb"
	I1217 00:41:04.096423  211439 cri.go:89] found id: ""
	I1217 00:41:04.096430  211439 logs.go:282] 1 containers: [4e190c0a9f421d12adcc9208d08ab91cb11f9e4325f737619a4434567dbccaeb]
	I1217 00:41:04.096472  211439 ssh_runner.go:195] Run: which crictl
	I1217 00:41:04.100049  211439 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:41:04.100103  211439 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:41:04.134420  211439 cri.go:89] found id: ""
	I1217 00:41:04.134445  211439 logs.go:282] 0 containers: []
	W1217 00:41:04.134458  211439 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:41:04.134465  211439 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:41:04.134512  211439 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:41:04.168320  211439 cri.go:89] found id: "a17d3e4d9deea2292fd636997da3ccec3cbb8173240b195f2d7308083588d524"
	I1217 00:41:04.168340  211439 cri.go:89] found id: ""
	I1217 00:41:04.168347  211439 logs.go:282] 1 containers: [a17d3e4d9deea2292fd636997da3ccec3cbb8173240b195f2d7308083588d524]
	I1217 00:41:04.168400  211439 ssh_runner.go:195] Run: which crictl
	I1217 00:41:04.171916  211439 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:41:04.171981  211439 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:41:04.205259  211439 cri.go:89] found id: ""
	I1217 00:41:04.205281  211439 logs.go:282] 0 containers: []
	W1217 00:41:04.205291  211439 logs.go:284] No container was found matching "kindnet"
	I1217 00:41:04.205310  211439 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 00:41:04.205365  211439 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 00:41:04.239182  211439 cri.go:89] found id: ""
	I1217 00:41:04.239207  211439 logs.go:282] 0 containers: []
	W1217 00:41:04.239216  211439 logs.go:284] No container was found matching "storage-provisioner"
	I1217 00:41:04.239225  211439 logs.go:123] Gathering logs for kubelet ...
	I1217 00:41:04.239236  211439 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:41:04.330591  211439 logs.go:123] Gathering logs for dmesg ...
	I1217 00:41:04.330623  211439 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:41:04.345900  211439 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:41:04.345925  211439 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:41:04.402896  211439 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:41:04.402918  211439 logs.go:123] Gathering logs for kube-apiserver [81770d70df736c55a0ce9c6a787d9a2c0ad3240f757d951f79d8bd010c312232] ...
	I1217 00:41:04.402931  211439 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81770d70df736c55a0ce9c6a787d9a2c0ad3240f757d951f79d8bd010c312232"
	I1217 00:41:04.440338  211439 logs.go:123] Gathering logs for kube-scheduler [4e190c0a9f421d12adcc9208d08ab91cb11f9e4325f737619a4434567dbccaeb] ...
	I1217 00:41:04.440361  211439 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4e190c0a9f421d12adcc9208d08ab91cb11f9e4325f737619a4434567dbccaeb"
	I1217 00:41:04.515442  211439 logs.go:123] Gathering logs for kube-controller-manager [a17d3e4d9deea2292fd636997da3ccec3cbb8173240b195f2d7308083588d524] ...
	I1217 00:41:04.515474  211439 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a17d3e4d9deea2292fd636997da3ccec3cbb8173240b195f2d7308083588d524"
	I1217 00:41:04.549933  211439 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:41:04.549963  211439 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:41:04.600104  211439 logs.go:123] Gathering logs for container status ...
	I1217 00:41:04.600131  211439 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:41:02.353140  224114 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1217 00:41:02.353621  224114 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 00:41:02.353671  224114 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:41:02.353716  224114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:41:02.381835  224114 cri.go:89] found id: "a2ff81b20789f24abdadda8b00c85dbe176cb1f647a6ae57405ea118d299cc22"
	I1217 00:41:02.381866  224114 cri.go:89] found id: ""
	I1217 00:41:02.381873  224114 logs.go:282] 1 containers: [a2ff81b20789f24abdadda8b00c85dbe176cb1f647a6ae57405ea118d299cc22]
	I1217 00:41:02.381933  224114 ssh_runner.go:195] Run: which crictl
	I1217 00:41:02.385800  224114 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:41:02.385862  224114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:41:02.412240  224114 cri.go:89] found id: ""
	I1217 00:41:02.412269  224114 logs.go:282] 0 containers: []
	W1217 00:41:02.412281  224114 logs.go:284] No container was found matching "etcd"
	I1217 00:41:02.412289  224114 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:41:02.412343  224114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:41:02.439209  224114 cri.go:89] found id: ""
	I1217 00:41:02.439235  224114 logs.go:282] 0 containers: []
	W1217 00:41:02.439247  224114 logs.go:284] No container was found matching "coredns"
	I1217 00:41:02.439254  224114 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:41:02.439313  224114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:41:02.464565  224114 cri.go:89] found id: "935bec7f90de32d67ca8b89cce2e3566052a6687fa0bbbcda96111a31ae6fc6a"
	I1217 00:41:02.464588  224114 cri.go:89] found id: ""
	I1217 00:41:02.464598  224114 logs.go:282] 1 containers: [935bec7f90de32d67ca8b89cce2e3566052a6687fa0bbbcda96111a31ae6fc6a]
	I1217 00:41:02.464655  224114 ssh_runner.go:195] Run: which crictl
	I1217 00:41:02.468741  224114 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:41:02.468804  224114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:41:02.494205  224114 cri.go:89] found id: ""
	I1217 00:41:02.494224  224114 logs.go:282] 0 containers: []
	W1217 00:41:02.494231  224114 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:41:02.494237  224114 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:41:02.494290  224114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:41:02.518616  224114 cri.go:89] found id: "dc75709a155a2a5401b46d5ddac2a265bc12181fadc33c0e4f267d116f4d6829"
	I1217 00:41:02.518634  224114 cri.go:89] found id: ""
	I1217 00:41:02.518642  224114 logs.go:282] 1 containers: [dc75709a155a2a5401b46d5ddac2a265bc12181fadc33c0e4f267d116f4d6829]
	I1217 00:41:02.518698  224114 ssh_runner.go:195] Run: which crictl
	I1217 00:41:02.522363  224114 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:41:02.522419  224114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:41:02.547818  224114 cri.go:89] found id: ""
	I1217 00:41:02.547839  224114 logs.go:282] 0 containers: []
	W1217 00:41:02.547856  224114 logs.go:284] No container was found matching "kindnet"
	I1217 00:41:02.547862  224114 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 00:41:02.547909  224114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 00:41:02.573099  224114 cri.go:89] found id: ""
	I1217 00:41:02.573131  224114 logs.go:282] 0 containers: []
	W1217 00:41:02.573142  224114 logs.go:284] No container was found matching "storage-provisioner"
	I1217 00:41:02.573152  224114 logs.go:123] Gathering logs for container status ...
	I1217 00:41:02.573168  224114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:41:02.601034  224114 logs.go:123] Gathering logs for kubelet ...
	I1217 00:41:02.601063  224114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:41:02.683723  224114 logs.go:123] Gathering logs for dmesg ...
	I1217 00:41:02.683755  224114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:41:02.697209  224114 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:41:02.697231  224114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:41:02.752253  224114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:41:02.752271  224114 logs.go:123] Gathering logs for kube-apiserver [a2ff81b20789f24abdadda8b00c85dbe176cb1f647a6ae57405ea118d299cc22] ...
	I1217 00:41:02.752286  224114 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a2ff81b20789f24abdadda8b00c85dbe176cb1f647a6ae57405ea118d299cc22"
	I1217 00:41:02.782842  224114 logs.go:123] Gathering logs for kube-scheduler [935bec7f90de32d67ca8b89cce2e3566052a6687fa0bbbcda96111a31ae6fc6a] ...
	I1217 00:41:02.782873  224114 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 935bec7f90de32d67ca8b89cce2e3566052a6687fa0bbbcda96111a31ae6fc6a"
	I1217 00:41:02.811154  224114 logs.go:123] Gathering logs for kube-controller-manager [dc75709a155a2a5401b46d5ddac2a265bc12181fadc33c0e4f267d116f4d6829] ...
	I1217 00:41:02.811177  224114 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dc75709a155a2a5401b46d5ddac2a265bc12181fadc33c0e4f267d116f4d6829"
	I1217 00:41:02.836228  224114 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:41:02.836251  224114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:41:05.392747  224114 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1217 00:41:05.393194  224114 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 00:41:05.393241  224114 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:41:05.393298  224114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:41:05.419848  224114 cri.go:89] found id: "a2ff81b20789f24abdadda8b00c85dbe176cb1f647a6ae57405ea118d299cc22"
	I1217 00:41:05.419867  224114 cri.go:89] found id: ""
	I1217 00:41:05.419874  224114 logs.go:282] 1 containers: [a2ff81b20789f24abdadda8b00c85dbe176cb1f647a6ae57405ea118d299cc22]
	I1217 00:41:05.419927  224114 ssh_runner.go:195] Run: which crictl
	I1217 00:41:05.423738  224114 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:41:05.423833  224114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:41:05.449883  224114 cri.go:89] found id: ""
	I1217 00:41:05.449906  224114 logs.go:282] 0 containers: []
	W1217 00:41:05.449914  224114 logs.go:284] No container was found matching "etcd"
	I1217 00:41:05.449920  224114 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:41:05.449968  224114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:41:05.475721  224114 cri.go:89] found id: ""
	I1217 00:41:05.475747  224114 logs.go:282] 0 containers: []
	W1217 00:41:05.475756  224114 logs.go:284] No container was found matching "coredns"
	I1217 00:41:05.475761  224114 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:41:05.475806  224114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:41:05.501646  224114 cri.go:89] found id: "935bec7f90de32d67ca8b89cce2e3566052a6687fa0bbbcda96111a31ae6fc6a"
	I1217 00:41:05.501667  224114 cri.go:89] found id: ""
	I1217 00:41:05.501676  224114 logs.go:282] 1 containers: [935bec7f90de32d67ca8b89cce2e3566052a6687fa0bbbcda96111a31ae6fc6a]
	I1217 00:41:05.501733  224114 ssh_runner.go:195] Run: which crictl
	I1217 00:41:05.505519  224114 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:41:05.505563  224114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:41:05.530319  224114 cri.go:89] found id: ""
	I1217 00:41:05.530347  224114 logs.go:282] 0 containers: []
	W1217 00:41:05.530354  224114 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:41:05.530359  224114 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:41:05.530404  224114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:41:05.555749  224114 cri.go:89] found id: "dc75709a155a2a5401b46d5ddac2a265bc12181fadc33c0e4f267d116f4d6829"
	I1217 00:41:05.555772  224114 cri.go:89] found id: ""
	I1217 00:41:05.555782  224114 logs.go:282] 1 containers: [dc75709a155a2a5401b46d5ddac2a265bc12181fadc33c0e4f267d116f4d6829]
	I1217 00:41:05.555833  224114 ssh_runner.go:195] Run: which crictl
	I1217 00:41:05.559677  224114 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:41:05.559736  224114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:41:05.584642  224114 cri.go:89] found id: ""
	I1217 00:41:05.584661  224114 logs.go:282] 0 containers: []
	W1217 00:41:05.584669  224114 logs.go:284] No container was found matching "kindnet"
	I1217 00:41:05.584673  224114 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 00:41:05.584718  224114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 00:41:05.610186  224114 cri.go:89] found id: ""
	I1217 00:41:05.610212  224114 logs.go:282] 0 containers: []
	W1217 00:41:05.610223  224114 logs.go:284] No container was found matching "storage-provisioner"
	I1217 00:41:05.610234  224114 logs.go:123] Gathering logs for kube-controller-manager [dc75709a155a2a5401b46d5ddac2a265bc12181fadc33c0e4f267d116f4d6829] ...
	I1217 00:41:05.610246  224114 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dc75709a155a2a5401b46d5ddac2a265bc12181fadc33c0e4f267d116f4d6829"
	I1217 00:41:05.636048  224114 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:41:05.636083  224114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:41:05.688752  224114 logs.go:123] Gathering logs for container status ...
	I1217 00:41:05.688779  224114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:41:05.719830  224114 logs.go:123] Gathering logs for kubelet ...
	I1217 00:41:05.719854  224114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:41:05.805733  224114 logs.go:123] Gathering logs for dmesg ...
	I1217 00:41:05.805764  224114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:41:05.819733  224114 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:41:05.819757  224114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:41:05.873651  224114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:41:05.873670  224114 logs.go:123] Gathering logs for kube-apiserver [a2ff81b20789f24abdadda8b00c85dbe176cb1f647a6ae57405ea118d299cc22] ...
	I1217 00:41:05.873682  224114 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a2ff81b20789f24abdadda8b00c85dbe176cb1f647a6ae57405ea118d299cc22"
	I1217 00:41:05.903578  224114 logs.go:123] Gathering logs for kube-scheduler [935bec7f90de32d67ca8b89cce2e3566052a6687fa0bbbcda96111a31ae6fc6a] ...
	I1217 00:41:05.903612  224114 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 935bec7f90de32d67ca8b89cce2e3566052a6687fa0bbbcda96111a31ae6fc6a"
	W1217 00:41:04.433962  260378 node_ready.go:57] node "old-k8s-version-742860" has "Ready":"False" status (will retry)
	I1217 00:41:05.933419  260378 node_ready.go:49] node "old-k8s-version-742860" is "Ready"
	I1217 00:41:05.933450  260378 node_ready.go:38] duration metric: took 12.503551649s for node "old-k8s-version-742860" to be "Ready" ...
	I1217 00:41:05.933467  260378 api_server.go:52] waiting for apiserver process to appear ...
	I1217 00:41:05.933519  260378 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:41:05.946689  260378 api_server.go:72] duration metric: took 12.922212104s to wait for apiserver process to appear ...
	I1217 00:41:05.946716  260378 api_server.go:88] waiting for apiserver healthz status ...
	I1217 00:41:05.946731  260378 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1217 00:41:05.952322  260378 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1217 00:41:05.953439  260378 api_server.go:141] control plane version: v1.28.0
	I1217 00:41:05.953457  260378 api_server.go:131] duration metric: took 6.736448ms to wait for apiserver health ...
	I1217 00:41:05.953465  260378 system_pods.go:43] waiting for kube-system pods to appear ...
	I1217 00:41:05.956557  260378 system_pods.go:59] 8 kube-system pods found
	I1217 00:41:05.956588  260378 system_pods.go:61] "coredns-5dd5756b68-zsfnr" [081004df-4dc4-442c-9c8a-0bb2ea2f3e06] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 00:41:05.956600  260378 system_pods.go:61] "etcd-old-k8s-version-742860" [c2fcc59d-32d4-48b7-badc-433d216c5607] Running
	I1217 00:41:05.956608  260378 system_pods.go:61] "kindnet-9sklv" [f0711ffd-97e7-4981-8eb2-ae13de35c604] Running
	I1217 00:41:05.956616  260378 system_pods.go:61] "kube-apiserver-old-k8s-version-742860" [221db466-6fee-47a5-aced-52e788058377] Running
	I1217 00:41:05.956622  260378 system_pods.go:61] "kube-controller-manager-old-k8s-version-742860" [8aee31ee-520d-4d4f-a748-60ac939f4a02] Running
	I1217 00:41:05.956631  260378 system_pods.go:61] "kube-proxy-ltxr5" [4cc26e30-3dbe-46b3-ad66-547936b92c1e] Running
	I1217 00:41:05.956636  260378 system_pods.go:61] "kube-scheduler-old-k8s-version-742860" [2f249c32-3345-41be-865b-41c56b6874a6] Running
	I1217 00:41:05.956643  260378 system_pods.go:61] "storage-provisioner" [69871fec-dde0-4ea2-9293-5aa6b43dd313] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 00:41:05.956650  260378 system_pods.go:74] duration metric: took 3.179952ms to wait for pod list to return data ...
	I1217 00:41:05.956660  260378 default_sa.go:34] waiting for default service account to be created ...
	I1217 00:41:05.958367  260378 default_sa.go:45] found service account: "default"
	I1217 00:41:05.958388  260378 default_sa.go:55] duration metric: took 1.721487ms for default service account to be created ...
	I1217 00:41:05.958395  260378 system_pods.go:116] waiting for k8s-apps to be running ...
	I1217 00:41:05.961452  260378 system_pods.go:86] 8 kube-system pods found
	I1217 00:41:05.961483  260378 system_pods.go:89] "coredns-5dd5756b68-zsfnr" [081004df-4dc4-442c-9c8a-0bb2ea2f3e06] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 00:41:05.961490  260378 system_pods.go:89] "etcd-old-k8s-version-742860" [c2fcc59d-32d4-48b7-badc-433d216c5607] Running
	I1217 00:41:05.961510  260378 system_pods.go:89] "kindnet-9sklv" [f0711ffd-97e7-4981-8eb2-ae13de35c604] Running
	I1217 00:41:05.961520  260378 system_pods.go:89] "kube-apiserver-old-k8s-version-742860" [221db466-6fee-47a5-aced-52e788058377] Running
	I1217 00:41:05.961527  260378 system_pods.go:89] "kube-controller-manager-old-k8s-version-742860" [8aee31ee-520d-4d4f-a748-60ac939f4a02] Running
	I1217 00:41:05.961538  260378 system_pods.go:89] "kube-proxy-ltxr5" [4cc26e30-3dbe-46b3-ad66-547936b92c1e] Running
	I1217 00:41:05.961554  260378 system_pods.go:89] "kube-scheduler-old-k8s-version-742860" [2f249c32-3345-41be-865b-41c56b6874a6] Running
	I1217 00:41:05.961565  260378 system_pods.go:89] "storage-provisioner" [69871fec-dde0-4ea2-9293-5aa6b43dd313] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 00:41:05.961588  260378 retry.go:31] will retry after 219.399864ms: missing components: kube-dns
	I1217 00:41:06.185778  260378 system_pods.go:86] 8 kube-system pods found
	I1217 00:41:06.185807  260378 system_pods.go:89] "coredns-5dd5756b68-zsfnr" [081004df-4dc4-442c-9c8a-0bb2ea2f3e06] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 00:41:06.185812  260378 system_pods.go:89] "etcd-old-k8s-version-742860" [c2fcc59d-32d4-48b7-badc-433d216c5607] Running
	I1217 00:41:06.185819  260378 system_pods.go:89] "kindnet-9sklv" [f0711ffd-97e7-4981-8eb2-ae13de35c604] Running
	I1217 00:41:06.185822  260378 system_pods.go:89] "kube-apiserver-old-k8s-version-742860" [221db466-6fee-47a5-aced-52e788058377] Running
	I1217 00:41:06.185826  260378 system_pods.go:89] "kube-controller-manager-old-k8s-version-742860" [8aee31ee-520d-4d4f-a748-60ac939f4a02] Running
	I1217 00:41:06.185838  260378 system_pods.go:89] "kube-proxy-ltxr5" [4cc26e30-3dbe-46b3-ad66-547936b92c1e] Running
	I1217 00:41:06.185841  260378 system_pods.go:89] "kube-scheduler-old-k8s-version-742860" [2f249c32-3345-41be-865b-41c56b6874a6] Running
	I1217 00:41:06.185848  260378 system_pods.go:89] "storage-provisioner" [69871fec-dde0-4ea2-9293-5aa6b43dd313] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 00:41:06.185865  260378 retry.go:31] will retry after 312.341984ms: missing components: kube-dns
	I1217 00:41:06.504229  260378 system_pods.go:86] 8 kube-system pods found
	I1217 00:41:06.504268  260378 system_pods.go:89] "coredns-5dd5756b68-zsfnr" [081004df-4dc4-442c-9c8a-0bb2ea2f3e06] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 00:41:06.504278  260378 system_pods.go:89] "etcd-old-k8s-version-742860" [c2fcc59d-32d4-48b7-badc-433d216c5607] Running
	I1217 00:41:06.504288  260378 system_pods.go:89] "kindnet-9sklv" [f0711ffd-97e7-4981-8eb2-ae13de35c604] Running
	I1217 00:41:06.504293  260378 system_pods.go:89] "kube-apiserver-old-k8s-version-742860" [221db466-6fee-47a5-aced-52e788058377] Running
	I1217 00:41:06.504300  260378 system_pods.go:89] "kube-controller-manager-old-k8s-version-742860" [8aee31ee-520d-4d4f-a748-60ac939f4a02] Running
	I1217 00:41:06.504308  260378 system_pods.go:89] "kube-proxy-ltxr5" [4cc26e30-3dbe-46b3-ad66-547936b92c1e] Running
	I1217 00:41:06.504313  260378 system_pods.go:89] "kube-scheduler-old-k8s-version-742860" [2f249c32-3345-41be-865b-41c56b6874a6] Running
	I1217 00:41:06.504322  260378 system_pods.go:89] "storage-provisioner" [69871fec-dde0-4ea2-9293-5aa6b43dd313] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 00:41:06.504338  260378 retry.go:31] will retry after 473.917458ms: missing components: kube-dns
	I1217 00:41:06.981896  260378 system_pods.go:86] 8 kube-system pods found
	I1217 00:41:06.981920  260378 system_pods.go:89] "coredns-5dd5756b68-zsfnr" [081004df-4dc4-442c-9c8a-0bb2ea2f3e06] Running
	I1217 00:41:06.981926  260378 system_pods.go:89] "etcd-old-k8s-version-742860" [c2fcc59d-32d4-48b7-badc-433d216c5607] Running
	I1217 00:41:06.981929  260378 system_pods.go:89] "kindnet-9sklv" [f0711ffd-97e7-4981-8eb2-ae13de35c604] Running
	I1217 00:41:06.981933  260378 system_pods.go:89] "kube-apiserver-old-k8s-version-742860" [221db466-6fee-47a5-aced-52e788058377] Running
	I1217 00:41:06.981937  260378 system_pods.go:89] "kube-controller-manager-old-k8s-version-742860" [8aee31ee-520d-4d4f-a748-60ac939f4a02] Running
	I1217 00:41:06.981941  260378 system_pods.go:89] "kube-proxy-ltxr5" [4cc26e30-3dbe-46b3-ad66-547936b92c1e] Running
	I1217 00:41:06.981946  260378 system_pods.go:89] "kube-scheduler-old-k8s-version-742860" [2f249c32-3345-41be-865b-41c56b6874a6] Running
	I1217 00:41:06.981949  260378 system_pods.go:89] "storage-provisioner" [69871fec-dde0-4ea2-9293-5aa6b43dd313] Running
	I1217 00:41:06.981956  260378 system_pods.go:126] duration metric: took 1.023556562s to wait for k8s-apps to be running ...
	I1217 00:41:06.981963  260378 system_svc.go:44] waiting for kubelet service to be running ....
	I1217 00:41:06.982024  260378 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 00:41:06.994623  260378 system_svc.go:56] duration metric: took 12.64955ms WaitForService to wait for kubelet
	I1217 00:41:06.994654  260378 kubeadm.go:587] duration metric: took 13.970181939s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 00:41:06.994673  260378 node_conditions.go:102] verifying NodePressure condition ...
	I1217 00:41:06.996923  260378 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1217 00:41:06.996950  260378 node_conditions.go:123] node cpu capacity is 8
	I1217 00:41:06.996968  260378 node_conditions.go:105] duration metric: took 2.2898ms to run NodePressure ...
	I1217 00:41:06.996981  260378 start.go:242] waiting for startup goroutines ...
	I1217 00:41:06.997005  260378 start.go:247] waiting for cluster config update ...
	I1217 00:41:06.997021  260378 start.go:256] writing updated cluster config ...
	I1217 00:41:06.997261  260378 ssh_runner.go:195] Run: rm -f paused
	I1217 00:41:07.000756  260378 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 00:41:07.004366  260378 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-zsfnr" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:41:07.008371  260378 pod_ready.go:94] pod "coredns-5dd5756b68-zsfnr" is "Ready"
	I1217 00:41:07.008396  260378 pod_ready.go:86] duration metric: took 4.013784ms for pod "coredns-5dd5756b68-zsfnr" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:41:07.010595  260378 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-742860" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:41:07.013962  260378 pod_ready.go:94] pod "etcd-old-k8s-version-742860" is "Ready"
	I1217 00:41:07.013978  260378 pod_ready.go:86] duration metric: took 3.363441ms for pod "etcd-old-k8s-version-742860" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:41:07.016145  260378 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-742860" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:41:07.019575  260378 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-742860" is "Ready"
	I1217 00:41:07.019590  260378 pod_ready.go:86] duration metric: took 3.429362ms for pod "kube-apiserver-old-k8s-version-742860" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:41:07.021657  260378 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-742860" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:41:07.404296  260378 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-742860" is "Ready"
	I1217 00:41:07.404318  260378 pod_ready.go:86] duration metric: took 382.644661ms for pod "kube-controller-manager-old-k8s-version-742860" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:41:07.607571  260378 pod_ready.go:83] waiting for pod "kube-proxy-ltxr5" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:41:08.005234  260378 pod_ready.go:94] pod "kube-proxy-ltxr5" is "Ready"
	I1217 00:41:08.005258  260378 pod_ready.go:86] duration metric: took 397.666404ms for pod "kube-proxy-ltxr5" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:41:08.205681  260378 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-742860" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:41:08.604666  260378 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-742860" is "Ready"
	I1217 00:41:08.604703  260378 pod_ready.go:86] duration metric: took 399.00137ms for pod "kube-scheduler-old-k8s-version-742860" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:41:08.604716  260378 pod_ready.go:40] duration metric: took 1.603934486s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 00:41:08.650496  260378 start.go:625] kubectl: 1.34.3, cluster: 1.28.0 (minor skew: 6)
	I1217 00:41:08.652358  260378 out.go:203] 
	W1217 00:41:08.653576  260378 out.go:285] ! /usr/local/bin/kubectl is version 1.34.3, which may have incompatibilities with Kubernetes 1.28.0.
	I1217 00:41:08.654859  260378 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1217 00:41:08.656439  260378 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-742860" cluster and "default" namespace by default
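	(The lines above show minikube's extra readiness wait for the labelled kube-system pods on the "old-k8s-version-742860" profile. For reference only, a roughly equivalent manual check could be run with kubectl against the same context; the context name is taken from this run and the timeout mirrors the 4m0s wait logged above:)
	
	  kubectl --context old-k8s-version-742860 -n kube-system wait pod \
	    -l k8s-app=kube-dns --for=condition=Ready --timeout=240s
	  kubectl --context old-k8s-version-742860 -n kube-system wait pod \
	    -l component=kube-apiserver --for=condition=Ready --timeout=240s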
	I1217 00:41:07.139220  211439 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1217 00:41:07.139614  211439 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1217 00:41:07.139677  211439 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:41:07.139735  211439 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:41:07.174504  211439 cri.go:89] found id: "81770d70df736c55a0ce9c6a787d9a2c0ad3240f757d951f79d8bd010c312232"
	I1217 00:41:07.174522  211439 cri.go:89] found id: ""
	I1217 00:41:07.174528  211439 logs.go:282] 1 containers: [81770d70df736c55a0ce9c6a787d9a2c0ad3240f757d951f79d8bd010c312232]
	I1217 00:41:07.174586  211439 ssh_runner.go:195] Run: which crictl
	I1217 00:41:07.178201  211439 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:41:07.178256  211439 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:41:07.212757  211439 cri.go:89] found id: ""
	I1217 00:41:07.212776  211439 logs.go:282] 0 containers: []
	W1217 00:41:07.212784  211439 logs.go:284] No container was found matching "etcd"
	I1217 00:41:07.212789  211439 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:41:07.212839  211439 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:41:07.245870  211439 cri.go:89] found id: ""
	I1217 00:41:07.245889  211439 logs.go:282] 0 containers: []
	W1217 00:41:07.245896  211439 logs.go:284] No container was found matching "coredns"
	I1217 00:41:07.245901  211439 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:41:07.245951  211439 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:41:07.279567  211439 cri.go:89] found id: "4e190c0a9f421d12adcc9208d08ab91cb11f9e4325f737619a4434567dbccaeb"
	I1217 00:41:07.279591  211439 cri.go:89] found id: ""
	I1217 00:41:07.279601  211439 logs.go:282] 1 containers: [4e190c0a9f421d12adcc9208d08ab91cb11f9e4325f737619a4434567dbccaeb]
	I1217 00:41:07.279649  211439 ssh_runner.go:195] Run: which crictl
	I1217 00:41:07.283189  211439 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:41:07.283242  211439 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:41:07.315583  211439 cri.go:89] found id: ""
	I1217 00:41:07.315602  211439 logs.go:282] 0 containers: []
	W1217 00:41:07.315609  211439 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:41:07.315613  211439 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:41:07.315664  211439 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:41:07.348251  211439 cri.go:89] found id: "a17d3e4d9deea2292fd636997da3ccec3cbb8173240b195f2d7308083588d524"
	I1217 00:41:07.348273  211439 cri.go:89] found id: ""
	I1217 00:41:07.348282  211439 logs.go:282] 1 containers: [a17d3e4d9deea2292fd636997da3ccec3cbb8173240b195f2d7308083588d524]
	I1217 00:41:07.348324  211439 ssh_runner.go:195] Run: which crictl
	I1217 00:41:07.351890  211439 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:41:07.351947  211439 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:41:07.384842  211439 cri.go:89] found id: ""
	I1217 00:41:07.384861  211439 logs.go:282] 0 containers: []
	W1217 00:41:07.384868  211439 logs.go:284] No container was found matching "kindnet"
	I1217 00:41:07.384874  211439 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 00:41:07.384926  211439 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 00:41:07.418258  211439 cri.go:89] found id: ""
	I1217 00:41:07.418283  211439 logs.go:282] 0 containers: []
	W1217 00:41:07.418294  211439 logs.go:284] No container was found matching "storage-provisioner"
	I1217 00:41:07.418305  211439 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:41:07.418317  211439 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:41:07.469051  211439 logs.go:123] Gathering logs for container status ...
	I1217 00:41:07.469075  211439 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:41:07.505780  211439 logs.go:123] Gathering logs for kubelet ...
	I1217 00:41:07.505807  211439 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:41:07.600275  211439 logs.go:123] Gathering logs for dmesg ...
	I1217 00:41:07.600309  211439 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:41:07.616331  211439 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:41:07.616353  211439 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:41:07.673750  211439 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:41:07.673772  211439 logs.go:123] Gathering logs for kube-apiserver [81770d70df736c55a0ce9c6a787d9a2c0ad3240f757d951f79d8bd010c312232] ...
	I1217 00:41:07.673784  211439 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81770d70df736c55a0ce9c6a787d9a2c0ad3240f757d951f79d8bd010c312232"
	I1217 00:41:07.710587  211439 logs.go:123] Gathering logs for kube-scheduler [4e190c0a9f421d12adcc9208d08ab91cb11f9e4325f737619a4434567dbccaeb] ...
	I1217 00:41:07.710614  211439 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4e190c0a9f421d12adcc9208d08ab91cb11f9e4325f737619a4434567dbccaeb"
	I1217 00:41:07.791384  211439 logs.go:123] Gathering logs for kube-controller-manager [a17d3e4d9deea2292fd636997da3ccec3cbb8173240b195f2d7308083588d524] ...
	I1217 00:41:07.791421  211439 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a17d3e4d9deea2292fd636997da3ccec3cbb8173240b195f2d7308083588d524"
	I1217 00:41:10.332268  211439 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1217 00:41:10.332669  211439 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1217 00:41:10.332731  211439 kubeadm.go:602] duration metric: took 4m3.277386574s to restartPrimaryControlPlane
	W1217 00:41:10.332781  211439 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1217 00:41:10.332845  211439 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1217 00:41:10.965504  211439 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 00:41:10.977330  211439 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 00:41:10.986260  211439 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 00:41:10.986315  211439 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 00:41:10.995170  211439 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 00:41:10.995190  211439 kubeadm.go:158] found existing configuration files:
	
	I1217 00:41:10.995231  211439 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1217 00:41:11.004184  211439 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 00:41:11.004250  211439 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 00:41:11.014509  211439 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1217 00:41:11.023365  211439 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 00:41:11.023419  211439 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 00:41:11.031816  211439 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1217 00:41:11.040220  211439 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 00:41:11.040261  211439 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 00:41:11.048927  211439 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1217 00:41:11.057302  211439 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 00:41:11.057343  211439 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 00:41:11.065472  211439 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 00:41:11.100452  211439 kubeadm.go:319] [init] Using Kubernetes version: v1.32.0
	I1217 00:41:11.100529  211439 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 00:41:11.116227  211439 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 00:41:11.116303  211439 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1217 00:41:11.116358  211439 kubeadm.go:319] OS: Linux
	I1217 00:41:11.116428  211439 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 00:41:11.116480  211439 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 00:41:11.116548  211439 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 00:41:11.116609  211439 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 00:41:11.116692  211439 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 00:41:11.116777  211439 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 00:41:11.116862  211439 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 00:41:11.116925  211439 kubeadm.go:319] CGROUPS_IO: enabled
	I1217 00:41:11.171867  211439 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 00:41:11.172046  211439 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 00:41:11.172174  211439 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 00:41:11.178840  211439 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 00:41:08.438278  224114 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1217 00:41:08.438704  224114 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 00:41:08.438763  224114 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:41:08.438846  224114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:41:08.467234  224114 cri.go:89] found id: "a2ff81b20789f24abdadda8b00c85dbe176cb1f647a6ae57405ea118d299cc22"
	I1217 00:41:08.467255  224114 cri.go:89] found id: ""
	I1217 00:41:08.467265  224114 logs.go:282] 1 containers: [a2ff81b20789f24abdadda8b00c85dbe176cb1f647a6ae57405ea118d299cc22]
	I1217 00:41:08.467322  224114 ssh_runner.go:195] Run: which crictl
	I1217 00:41:08.471559  224114 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:41:08.471622  224114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:41:08.497111  224114 cri.go:89] found id: ""
	I1217 00:41:08.497135  224114 logs.go:282] 0 containers: []
	W1217 00:41:08.497146  224114 logs.go:284] No container was found matching "etcd"
	I1217 00:41:08.497153  224114 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:41:08.497213  224114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:41:08.523353  224114 cri.go:89] found id: ""
	I1217 00:41:08.523381  224114 logs.go:282] 0 containers: []
	W1217 00:41:08.523394  224114 logs.go:284] No container was found matching "coredns"
	I1217 00:41:08.523403  224114 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:41:08.523462  224114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:41:08.550380  224114 cri.go:89] found id: "935bec7f90de32d67ca8b89cce2e3566052a6687fa0bbbcda96111a31ae6fc6a"
	I1217 00:41:08.550403  224114 cri.go:89] found id: ""
	I1217 00:41:08.550414  224114 logs.go:282] 1 containers: [935bec7f90de32d67ca8b89cce2e3566052a6687fa0bbbcda96111a31ae6fc6a]
	I1217 00:41:08.550478  224114 ssh_runner.go:195] Run: which crictl
	I1217 00:41:08.554166  224114 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:41:08.554221  224114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:41:08.579981  224114 cri.go:89] found id: ""
	I1217 00:41:08.580034  224114 logs.go:282] 0 containers: []
	W1217 00:41:08.580044  224114 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:41:08.580052  224114 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:41:08.580177  224114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:41:08.605594  224114 cri.go:89] found id: "915b5ab76424f0e967e79964a7369a96b5e960e3c50f67c986abc539bf2b3e3d"
	I1217 00:41:08.605615  224114 cri.go:89] found id: "dc75709a155a2a5401b46d5ddac2a265bc12181fadc33c0e4f267d116f4d6829"
	I1217 00:41:08.605621  224114 cri.go:89] found id: ""
	I1217 00:41:08.605630  224114 logs.go:282] 2 containers: [915b5ab76424f0e967e79964a7369a96b5e960e3c50f67c986abc539bf2b3e3d dc75709a155a2a5401b46d5ddac2a265bc12181fadc33c0e4f267d116f4d6829]
	I1217 00:41:08.605685  224114 ssh_runner.go:195] Run: which crictl
	I1217 00:41:08.609699  224114 ssh_runner.go:195] Run: which crictl
	I1217 00:41:08.613299  224114 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:41:08.613356  224114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:41:08.643306  224114 cri.go:89] found id: ""
	I1217 00:41:08.643332  224114 logs.go:282] 0 containers: []
	W1217 00:41:08.643343  224114 logs.go:284] No container was found matching "kindnet"
	I1217 00:41:08.643349  224114 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 00:41:08.643403  224114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 00:41:08.671865  224114 cri.go:89] found id: ""
	I1217 00:41:08.671890  224114 logs.go:282] 0 containers: []
	W1217 00:41:08.671898  224114 logs.go:284] No container was found matching "storage-provisioner"
	I1217 00:41:08.671915  224114 logs.go:123] Gathering logs for kubelet ...
	I1217 00:41:08.671928  224114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:41:08.769546  224114 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:41:08.769585  224114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:41:08.829478  224114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:41:08.829496  224114 logs.go:123] Gathering logs for kube-controller-manager [915b5ab76424f0e967e79964a7369a96b5e960e3c50f67c986abc539bf2b3e3d] ...
	I1217 00:41:08.829508  224114 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 915b5ab76424f0e967e79964a7369a96b5e960e3c50f67c986abc539bf2b3e3d"
	I1217 00:41:08.861140  224114 logs.go:123] Gathering logs for kube-controller-manager [dc75709a155a2a5401b46d5ddac2a265bc12181fadc33c0e4f267d116f4d6829] ...
	I1217 00:41:08.861167  224114 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dc75709a155a2a5401b46d5ddac2a265bc12181fadc33c0e4f267d116f4d6829"
	I1217 00:41:08.886701  224114 logs.go:123] Gathering logs for dmesg ...
	I1217 00:41:08.886727  224114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:41:08.900501  224114 logs.go:123] Gathering logs for kube-apiserver [a2ff81b20789f24abdadda8b00c85dbe176cb1f647a6ae57405ea118d299cc22] ...
	I1217 00:41:08.900526  224114 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a2ff81b20789f24abdadda8b00c85dbe176cb1f647a6ae57405ea118d299cc22"
	I1217 00:41:08.931589  224114 logs.go:123] Gathering logs for kube-scheduler [935bec7f90de32d67ca8b89cce2e3566052a6687fa0bbbcda96111a31ae6fc6a] ...
	I1217 00:41:08.931619  224114 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 935bec7f90de32d67ca8b89cce2e3566052a6687fa0bbbcda96111a31ae6fc6a"
	I1217 00:41:08.958849  224114 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:41:08.958872  224114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:41:09.014465  224114 logs.go:123] Gathering logs for container status ...
	I1217 00:41:09.014504  224114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:41:11.546572  224114 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1217 00:41:11.547051  224114 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 00:41:11.547114  224114 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:41:11.547172  224114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:41:11.573764  224114 cri.go:89] found id: "a2ff81b20789f24abdadda8b00c85dbe176cb1f647a6ae57405ea118d299cc22"
	I1217 00:41:11.573784  224114 cri.go:89] found id: ""
	I1217 00:41:11.573793  224114 logs.go:282] 1 containers: [a2ff81b20789f24abdadda8b00c85dbe176cb1f647a6ae57405ea118d299cc22]
	I1217 00:41:11.573852  224114 ssh_runner.go:195] Run: which crictl
	I1217 00:41:11.578277  224114 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:41:11.578407  224114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:41:11.603733  224114 cri.go:89] found id: ""
	I1217 00:41:11.603758  224114 logs.go:282] 0 containers: []
	W1217 00:41:11.603769  224114 logs.go:284] No container was found matching "etcd"
	I1217 00:41:11.603777  224114 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:41:11.603840  224114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:41:11.628361  224114 cri.go:89] found id: ""
	I1217 00:41:11.628384  224114 logs.go:282] 0 containers: []
	W1217 00:41:11.628394  224114 logs.go:284] No container was found matching "coredns"
	I1217 00:41:11.628402  224114 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:41:11.628457  224114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:41:11.655843  224114 cri.go:89] found id: "935bec7f90de32d67ca8b89cce2e3566052a6687fa0bbbcda96111a31ae6fc6a"
	I1217 00:41:11.655864  224114 cri.go:89] found id: ""
	I1217 00:41:11.655873  224114 logs.go:282] 1 containers: [935bec7f90de32d67ca8b89cce2e3566052a6687fa0bbbcda96111a31ae6fc6a]
	I1217 00:41:11.655933  224114 ssh_runner.go:195] Run: which crictl
	I1217 00:41:11.659820  224114 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:41:11.659895  224114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:41:11.684368  224114 cri.go:89] found id: ""
	I1217 00:41:11.684391  224114 logs.go:282] 0 containers: []
	W1217 00:41:11.684401  224114 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:41:11.684408  224114 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:41:11.684466  224114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:41:11.710373  224114 cri.go:89] found id: "915b5ab76424f0e967e79964a7369a96b5e960e3c50f67c986abc539bf2b3e3d"
	I1217 00:41:11.710391  224114 cri.go:89] found id: "dc75709a155a2a5401b46d5ddac2a265bc12181fadc33c0e4f267d116f4d6829"
	I1217 00:41:11.710395  224114 cri.go:89] found id: ""
	I1217 00:41:11.710402  224114 logs.go:282] 2 containers: [915b5ab76424f0e967e79964a7369a96b5e960e3c50f67c986abc539bf2b3e3d dc75709a155a2a5401b46d5ddac2a265bc12181fadc33c0e4f267d116f4d6829]
	I1217 00:41:11.710455  224114 ssh_runner.go:195] Run: which crictl
	I1217 00:41:11.714175  224114 ssh_runner.go:195] Run: which crictl
	I1217 00:41:11.717760  224114 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:41:11.717812  224114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:41:11.744205  224114 cri.go:89] found id: ""
	I1217 00:41:11.744229  224114 logs.go:282] 0 containers: []
	W1217 00:41:11.744238  224114 logs.go:284] No container was found matching "kindnet"
	I1217 00:41:11.744245  224114 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 00:41:11.744298  224114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 00:41:11.770516  224114 cri.go:89] found id: ""
	I1217 00:41:11.770541  224114 logs.go:282] 0 containers: []
	W1217 00:41:11.770552  224114 logs.go:284] No container was found matching "storage-provisioner"
	I1217 00:41:11.770569  224114 logs.go:123] Gathering logs for kube-controller-manager [915b5ab76424f0e967e79964a7369a96b5e960e3c50f67c986abc539bf2b3e3d] ...
	I1217 00:41:11.770579  224114 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 915b5ab76424f0e967e79964a7369a96b5e960e3c50f67c986abc539bf2b3e3d"
	I1217 00:41:11.796152  224114 logs.go:123] Gathering logs for kube-controller-manager [dc75709a155a2a5401b46d5ddac2a265bc12181fadc33c0e4f267d116f4d6829] ...
	I1217 00:41:11.796179  224114 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dc75709a155a2a5401b46d5ddac2a265bc12181fadc33c0e4f267d116f4d6829"
	I1217 00:41:11.823059  224114 logs.go:123] Gathering logs for container status ...
	I1217 00:41:11.823086  224114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:41:11.853768  224114 logs.go:123] Gathering logs for dmesg ...
	I1217 00:41:11.853799  224114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:41:11.867314  224114 logs.go:123] Gathering logs for kube-scheduler [935bec7f90de32d67ca8b89cce2e3566052a6687fa0bbbcda96111a31ae6fc6a] ...
	I1217 00:41:11.867343  224114 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 935bec7f90de32d67ca8b89cce2e3566052a6687fa0bbbcda96111a31ae6fc6a"
	I1217 00:41:11.892420  224114 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:41:11.892445  224114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:41:11.952716  224114 logs.go:123] Gathering logs for kubelet ...
	I1217 00:41:11.952747  224114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:41:12.044583  224114 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:41:12.044617  224114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:41:12.100940  224114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:41:12.100961  224114 logs.go:123] Gathering logs for kube-apiserver [a2ff81b20789f24abdadda8b00c85dbe176cb1f647a6ae57405ea118d299cc22] ...
	I1217 00:41:12.100977  224114 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a2ff81b20789f24abdadda8b00c85dbe176cb1f647a6ae57405ea118d299cc22"
	I1217 00:41:11.182198  211439 out.go:252]   - Generating certificates and keys ...
	I1217 00:41:11.182283  211439 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 00:41:11.182383  211439 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 00:41:11.182502  211439 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1217 00:41:11.182595  211439 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1217 00:41:11.182678  211439 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1217 00:41:11.182736  211439 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1217 00:41:11.182848  211439 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1217 00:41:11.183023  211439 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1217 00:41:11.183145  211439 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1217 00:41:11.183248  211439 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1217 00:41:11.183314  211439 kubeadm.go:319] [certs] Using the existing "sa" key
	I1217 00:41:11.183371  211439 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 00:41:11.291384  211439 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 00:41:11.386354  211439 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 00:41:11.591302  211439 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 00:41:11.943091  211439 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 00:41:12.231195  211439 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 00:41:12.231706  211439 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 00:41:12.233905  211439 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 00:41:12.235442  211439 out.go:252]   - Booting up control plane ...
	I1217 00:41:12.235566  211439 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 00:41:12.235675  211439 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 00:41:12.236340  211439 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 00:41:12.246568  211439 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 00:41:12.251974  211439 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 00:41:12.252052  211439 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 00:41:12.331770  211439 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 00:41:12.331876  211439 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 00:41:13.333246  211439 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001517597s
	I1217 00:41:13.333349  211439 kubeadm.go:319] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1217 00:41:16.335610  211439 kubeadm.go:319] [api-check] The API server is healthy after 3.002389594s
	I1217 00:41:16.347610  211439 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1217 00:41:16.358767  211439 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1217 00:41:16.378454  211439 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1217 00:41:16.378764  211439 kubeadm.go:319] [mark-control-plane] Marking the node stopped-upgrade-028618 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1217 00:41:16.385750  211439 kubeadm.go:319] [bootstrap-token] Using token: srsabv.qohj3cih1jbc41yg
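	(The "==> ... <==" sections that follow are the report's post-mortem dump for the old-k8s-version-742860 node, gathered over SSH with commands like those already visible in the log above; approximately:)
	
	  sudo journalctl -u crio -n 400                      # ==> CRI-O <==
	  sudo crictl ps -a                                   # ==> container status <==
	  sudo crictl logs --tail 400 <container-id>          # per-container sections, e.g. coredns
	  sudo /var/lib/minikube/binaries/<version>/kubectl describe nodes \
	    --kubeconfig=/var/lib/minikube/kubeconfig         # ==> describe nodes <==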
	
	
	==> CRI-O <==
	Dec 17 00:41:06 old-k8s-version-742860 crio[774]: time="2025-12-17T00:41:06.265312392Z" level=info msg="Starting container: bf70c6491f400e98f877d5b6381fc0cb4cfa2fe15f38975587ae1bc2450b2d94" id=13ca9c62-05db-4bbf-bae3-d89cbfa911f8 name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 00:41:06 old-k8s-version-742860 crio[774]: time="2025-12-17T00:41:06.267426479Z" level=info msg="Started container" PID=2158 containerID=bf70c6491f400e98f877d5b6381fc0cb4cfa2fe15f38975587ae1bc2450b2d94 description=kube-system/coredns-5dd5756b68-zsfnr/coredns id=13ca9c62-05db-4bbf-bae3-d89cbfa911f8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=5835e2ab651e4635aed3d11bf849c9a1142265c835bbddd42ebda685715ad8f0
	Dec 17 00:41:09 old-k8s-version-742860 crio[774]: time="2025-12-17T00:41:09.148242239Z" level=info msg="Running pod sandbox: default/busybox/POD" id=5ddab265-984a-40fd-b403-446f1968ef1e name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 17 00:41:09 old-k8s-version-742860 crio[774]: time="2025-12-17T00:41:09.148312647Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 00:41:09 old-k8s-version-742860 crio[774]: time="2025-12-17T00:41:09.152799829Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:2264c5c00a42bf160d5556f8fc7ad1b7baf09b139509319b98a2f1f81af92f8d UID:80cfe0a3-1fd0-46e2-90ad-14f7c908c862 NetNS:/var/run/netns/caaab28c-8869-4d28-bea2-86d4f817e078 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008b3b0}] Aliases:map[]}"
	Dec 17 00:41:09 old-k8s-version-742860 crio[774]: time="2025-12-17T00:41:09.152825235Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 17 00:41:09 old-k8s-version-742860 crio[774]: time="2025-12-17T00:41:09.161214877Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:2264c5c00a42bf160d5556f8fc7ad1b7baf09b139509319b98a2f1f81af92f8d UID:80cfe0a3-1fd0-46e2-90ad-14f7c908c862 NetNS:/var/run/netns/caaab28c-8869-4d28-bea2-86d4f817e078 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008b3b0}] Aliases:map[]}"
	Dec 17 00:41:09 old-k8s-version-742860 crio[774]: time="2025-12-17T00:41:09.16135489Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 17 00:41:09 old-k8s-version-742860 crio[774]: time="2025-12-17T00:41:09.162039539Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 17 00:41:09 old-k8s-version-742860 crio[774]: time="2025-12-17T00:41:09.162850205Z" level=info msg="Ran pod sandbox 2264c5c00a42bf160d5556f8fc7ad1b7baf09b139509319b98a2f1f81af92f8d with infra container: default/busybox/POD" id=5ddab265-984a-40fd-b403-446f1968ef1e name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 17 00:41:09 old-k8s-version-742860 crio[774]: time="2025-12-17T00:41:09.163896192Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=1dfb326c-399e-4f54-ab3b-687446c2594d name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:41:09 old-k8s-version-742860 crio[774]: time="2025-12-17T00:41:09.164063518Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=1dfb326c-399e-4f54-ab3b-687446c2594d name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:41:09 old-k8s-version-742860 crio[774]: time="2025-12-17T00:41:09.164108628Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=1dfb326c-399e-4f54-ab3b-687446c2594d name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:41:09 old-k8s-version-742860 crio[774]: time="2025-12-17T00:41:09.164607712Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=eea885f9-916a-4994-91c1-f3edf34b6f12 name=/runtime.v1.ImageService/PullImage
	Dec 17 00:41:09 old-k8s-version-742860 crio[774]: time="2025-12-17T00:41:09.168120957Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 17 00:41:09 old-k8s-version-742860 crio[774]: time="2025-12-17T00:41:09.79582193Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=eea885f9-916a-4994-91c1-f3edf34b6f12 name=/runtime.v1.ImageService/PullImage
	Dec 17 00:41:09 old-k8s-version-742860 crio[774]: time="2025-12-17T00:41:09.796800743Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=51d2f1e2-76e0-4ee7-aad9-f631bac4cb6a name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:41:09 old-k8s-version-742860 crio[774]: time="2025-12-17T00:41:09.798663769Z" level=info msg="Creating container: default/busybox/busybox" id=9ad5e301-0003-4041-9ee9-afacc8505bb4 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 00:41:09 old-k8s-version-742860 crio[774]: time="2025-12-17T00:41:09.798799602Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 00:41:09 old-k8s-version-742860 crio[774]: time="2025-12-17T00:41:09.802335037Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 00:41:09 old-k8s-version-742860 crio[774]: time="2025-12-17T00:41:09.802798949Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 00:41:09 old-k8s-version-742860 crio[774]: time="2025-12-17T00:41:09.827623001Z" level=info msg="Created container d79587d2db353575fb73697a3828ac61a9ed79b4b09485f16eaff1a062373120: default/busybox/busybox" id=9ad5e301-0003-4041-9ee9-afacc8505bb4 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 00:41:09 old-k8s-version-742860 crio[774]: time="2025-12-17T00:41:09.828213457Z" level=info msg="Starting container: d79587d2db353575fb73697a3828ac61a9ed79b4b09485f16eaff1a062373120" id=99ca7e44-9c17-423a-a1ba-7a6a10f1de50 name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 00:41:09 old-k8s-version-742860 crio[774]: time="2025-12-17T00:41:09.829825387Z" level=info msg="Started container" PID=2234 containerID=d79587d2db353575fb73697a3828ac61a9ed79b4b09485f16eaff1a062373120 description=default/busybox/busybox id=99ca7e44-9c17-423a-a1ba-7a6a10f1de50 name=/runtime.v1.RuntimeService/StartContainer sandboxID=2264c5c00a42bf160d5556f8fc7ad1b7baf09b139509319b98a2f1f81af92f8d
	Dec 17 00:41:15 old-k8s-version-742860 crio[774]: time="2025-12-17T00:41:15.939099783Z" level=error msg="Unhandled Error: unable to upgrade websocket connection: websocket server finished before becoming ready (logger=\"UnhandledError\")"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	d79587d2db353       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   7 seconds ago       Running             busybox                   0                   2264c5c00a42b       busybox                                          default
	bf70c6491f400       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      11 seconds ago      Running             coredns                   0                   5835e2ab651e4       coredns-5dd5756b68-zsfnr                         kube-system
	f68693f221b28       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      11 seconds ago      Running             storage-provisioner       0                   d8f5d9bee6b71       storage-provisioner                              kube-system
	6fe3928052a58       docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11    21 seconds ago      Running             kindnet-cni               0                   ba50e13df716f       kindnet-9sklv                                    kube-system
	589c3a5b2f57e       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                      23 seconds ago      Running             kube-proxy                0                   119548c848490       kube-proxy-ltxr5                                 kube-system
	bf24a8dca6bd5       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                      41 seconds ago      Running             kube-scheduler            0                   bf4a4020b43f6       kube-scheduler-old-k8s-version-742860            kube-system
	da55bd07a2803       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      41 seconds ago      Running             etcd                      0                   bc1e6db76b661       etcd-old-k8s-version-742860                      kube-system
	361788b5e4546       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                      41 seconds ago      Running             kube-apiserver            0                   cb1137edfd953       kube-apiserver-old-k8s-version-742860            kube-system
	1e159993da5ed       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                      41 seconds ago      Running             kube-controller-manager   0                   9d5f92b214c09       kube-controller-manager-old-k8s-version-742860   kube-system
	
	
	==> coredns [bf70c6491f400e98f877d5b6381fc0cb4cfa2fe15f38975587ae1bc2450b2d94] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 4c7f44b73086be760ec9e64204f63c5cc5a952c8c1c55ba0b41d8fc3315ce3c7d0259d04847cb8b4561043d4549603f3bccfd9b397eeb814eef159d244d26f39
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:54716 - 21383 "HINFO IN 1364551697407561881.6580499907816380177. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.106385889s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-742860
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-742860
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c7bb9b74fe8fa422b352c813eb039f077f405cb1
	                    minikube.k8s.io/name=old-k8s-version-742860
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_17T00_40_40_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Dec 2025 00:40:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-742860
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Dec 2025 00:41:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Dec 2025 00:41:10 +0000   Wed, 17 Dec 2025 00:40:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Dec 2025 00:41:10 +0000   Wed, 17 Dec 2025 00:40:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Dec 2025 00:41:10 +0000   Wed, 17 Dec 2025 00:40:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Dec 2025 00:41:10 +0000   Wed, 17 Dec 2025 00:41:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    old-k8s-version-742860
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 f435001bb2f82675fe905cfc693dda54
	  System UUID:                adda18ce-4c65-4338-86bc-e27f9ae5140e
	  Boot ID:                    0e9cedc6-c46e-4354-b3d2-9272a8b33ae5
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 coredns-5dd5756b68-zsfnr                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     24s
	  kube-system                 etcd-old-k8s-version-742860                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         38s
	  kube-system                 kindnet-9sklv                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      24s
	  kube-system                 kube-apiserver-old-k8s-version-742860             250m (3%)     0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 kube-controller-manager-old-k8s-version-742860    200m (2%)     0 (0%)      0 (0%)           0 (0%)         38s
	  kube-system                 kube-proxy-ltxr5                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	  kube-system                 kube-scheduler-old-k8s-version-742860             100m (1%)     0 (0%)      0 (0%)           0 (0%)         38s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 23s   kube-proxy       
	  Normal  Starting                 38s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  38s   kubelet          Node old-k8s-version-742860 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    38s   kubelet          Node old-k8s-version-742860 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     38s   kubelet          Node old-k8s-version-742860 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           25s   node-controller  Node old-k8s-version-742860 event: Registered Node old-k8s-version-742860 in Controller
	  Normal  NodeReady                12s   kubelet          Node old-k8s-version-742860 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.089382] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024236] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.864694] kauditd_printk_skb: 47 callbacks suppressed
	[Dec17 00:07] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +1.006904] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +1.023891] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +1.023891] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +1.023853] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +1.023879] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +2.048755] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +4.030595] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +8.447143] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[ +16.382404] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000015] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[Dec17 00:08] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	
	
	==> etcd [da55bd07a280362516255d25995cb9bca3dfaa74eebbecc18f105b1f626e3fce] <==
	{"level":"info","ts":"2025-12-17T00:40:35.482389Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-17T00:40:35.482779Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-17T00:40:35.483959Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-12-17T00:40:35.484247Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"dfc97eb0aae75b33","initial-advertise-peer-urls":["https://192.168.94.2:2380"],"listen-peer-urls":["https://192.168.94.2:2380"],"advertise-client-urls":["https://192.168.94.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.94.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-17T00:40:35.48429Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-17T00:40:35.484403Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2025-12-17T00:40:35.484419Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2025-12-17T00:40:35.67317Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 is starting a new election at term 1"}
	{"level":"info","ts":"2025-12-17T00:40:35.67322Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became pre-candidate at term 1"}
	{"level":"info","ts":"2025-12-17T00:40:35.673258Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 received MsgPreVoteResp from dfc97eb0aae75b33 at term 1"}
	{"level":"info","ts":"2025-12-17T00:40:35.673292Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became candidate at term 2"}
	{"level":"info","ts":"2025-12-17T00:40:35.673317Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 received MsgVoteResp from dfc97eb0aae75b33 at term 2"}
	{"level":"info","ts":"2025-12-17T00:40:35.673342Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became leader at term 2"}
	{"level":"info","ts":"2025-12-17T00:40:35.673356Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: dfc97eb0aae75b33 elected leader dfc97eb0aae75b33 at term 2"}
	{"level":"info","ts":"2025-12-17T00:40:35.674217Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"dfc97eb0aae75b33","local-member-attributes":"{Name:old-k8s-version-742860 ClientURLs:[https://192.168.94.2:2379]}","request-path":"/0/members/dfc97eb0aae75b33/attributes","cluster-id":"da400bbece288f5a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-17T00:40:35.674219Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-17T00:40:35.674233Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-17T00:40:35.674251Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-17T00:40:35.674444Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-17T00:40:35.674466Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-17T00:40:35.674788Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"da400bbece288f5a","local-member-id":"dfc97eb0aae75b33","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-17T00:40:35.674894Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-17T00:40:35.674922Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-17T00:40:35.67565Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-17T00:40:35.675696Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.94.2:2379"}
	
	
	==> kernel <==
	 00:41:17 up  1:23,  0 user,  load average: 3.60, 2.59, 1.75
	Linux old-k8s-version-742860 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [6fe3928052a58e188a93d7d5bc2a9ccec38bf5d87ea80425708b8a98df964cac] <==
	I1217 00:40:55.568086       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1217 00:40:55.568347       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1217 00:40:55.568471       1 main.go:148] setting mtu 1500 for CNI 
	I1217 00:40:55.568487       1 main.go:178] kindnetd IP family: "ipv4"
	I1217 00:40:55.568508       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-17T00:40:55Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1217 00:40:55.769156       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1217 00:40:55.769218       1 controller.go:381] "Waiting for informer caches to sync"
	I1217 00:40:55.769230       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1217 00:40:55.770294       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1217 00:40:56.169556       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1217 00:40:56.169580       1 metrics.go:72] Registering metrics
	I1217 00:40:56.169674       1 controller.go:711] "Syncing nftables rules"
	I1217 00:41:05.777123       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1217 00:41:05.777179       1 main.go:301] handling current node
	I1217 00:41:15.772144       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1217 00:41:15.772183       1 main.go:301] handling current node
	
	
	==> kube-apiserver [361788b5e4546bb735be1b39c8aa6f60cdc65f926436c4e50eb4b8a00cb02899] <==
	I1217 00:40:36.902868       1 aggregator.go:166] initial CRD sync complete...
	I1217 00:40:36.902876       1 autoregister_controller.go:141] Starting autoregister controller
	I1217 00:40:36.902883       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1217 00:40:36.902890       1 cache.go:39] Caches are synced for autoregister controller
	I1217 00:40:36.903072       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1217 00:40:36.903380       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1217 00:40:36.903402       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1217 00:40:36.908296       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1217 00:40:37.090851       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1217 00:40:37.806217       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1217 00:40:37.809805       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1217 00:40:37.809831       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1217 00:40:38.160581       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1217 00:40:38.195344       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1217 00:40:38.312300       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1217 00:40:38.317916       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I1217 00:40:38.318897       1 controller.go:624] quota admission added evaluator for: endpoints
	I1217 00:40:38.322618       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1217 00:40:38.842074       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1217 00:40:39.501523       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1217 00:40:39.510511       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1217 00:40:39.520077       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1217 00:40:53.177588       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I1217 00:40:53.177588       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I1217 00:40:53.232297       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [1e159993da5ede41661b63c336db12da4e52728a32c1cc5d98114a8e7ed52cf4] <==
	I1217 00:40:52.574768       1 shared_informer.go:318] Caches are synced for cronjob
	I1217 00:40:52.623398       1 shared_informer.go:318] Caches are synced for TTL after finished
	I1217 00:40:52.650273       1 shared_informer.go:318] Caches are synced for resource quota
	I1217 00:40:52.656543       1 shared_informer.go:318] Caches are synced for HPA
	I1217 00:40:52.677176       1 shared_informer.go:318] Caches are synced for resource quota
	I1217 00:40:52.988697       1 shared_informer.go:318] Caches are synced for garbage collector
	I1217 00:40:53.028941       1 shared_informer.go:318] Caches are synced for garbage collector
	I1217 00:40:53.028981       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1217 00:40:53.191428       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-ltxr5"
	I1217 00:40:53.193255       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-9sklv"
	I1217 00:40:53.236119       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1217 00:40:53.452801       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1217 00:40:53.480670       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-zsfnr"
	I1217 00:40:53.485917       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-rmbtf"
	I1217 00:40:53.497562       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="261.69232ms"
	I1217 00:40:53.505723       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-rmbtf"
	I1217 00:40:53.514366       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="16.751312ms"
	I1217 00:40:53.524230       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="9.81013ms"
	I1217 00:40:53.524345       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="73.349µs"
	I1217 00:40:53.524430       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="58.161µs"
	I1217 00:41:05.919973       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="138.459µs"
	I1217 00:41:05.931404       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="172.862µs"
	I1217 00:41:06.664820       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="6.275418ms"
	I1217 00:41:06.664939       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="65.64µs"
	I1217 00:41:07.423945       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [589c3a5b2f57eff4fbf6c0f697597ab1230ee0b0533cb13792215307536916ae] <==
	I1217 00:40:53.596356       1 server_others.go:69] "Using iptables proxy"
	I1217 00:40:53.606088       1 node.go:141] Successfully retrieved node IP: 192.168.94.2
	I1217 00:40:53.632305       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1217 00:40:53.635170       1 server_others.go:152] "Using iptables Proxier"
	I1217 00:40:53.635200       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1217 00:40:53.635208       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1217 00:40:53.635232       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1217 00:40:53.635436       1 server.go:846] "Version info" version="v1.28.0"
	I1217 00:40:53.635448       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 00:40:53.637478       1 config.go:97] "Starting endpoint slice config controller"
	I1217 00:40:53.637515       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1217 00:40:53.637518       1 config.go:315] "Starting node config controller"
	I1217 00:40:53.637575       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1217 00:40:53.638108       1 config.go:188] "Starting service config controller"
	I1217 00:40:53.638129       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1217 00:40:53.737947       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1217 00:40:53.737951       1 shared_informer.go:318] Caches are synced for node config
	I1217 00:40:53.739055       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [bf24a8dca6bd563adbdfa903ed827d42c3feab4b96d2b7ad07aac2d9f97457c0] <==
	W1217 00:40:36.855444       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1217 00:40:36.855463       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1217 00:40:36.855489       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1217 00:40:36.855506       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1217 00:40:36.855267       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1217 00:40:36.855527       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1217 00:40:36.855543       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1217 00:40:36.855557       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1217 00:40:36.855514       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1217 00:40:36.855589       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1217 00:40:37.729924       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1217 00:40:37.729960       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1217 00:40:37.736277       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1217 00:40:37.736303       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1217 00:40:37.772738       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1217 00:40:37.772762       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1217 00:40:37.779954       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1217 00:40:37.779982       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1217 00:40:37.825289       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1217 00:40:37.825331       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1217 00:40:37.840761       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1217 00:40:37.840802       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1217 00:40:37.970330       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1217 00:40:37.970367       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I1217 00:40:39.853019       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 17 00:40:52 old-k8s-version-742860 kubelet[1388]: I1217 00:40:52.530494    1388 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 17 00:40:53 old-k8s-version-742860 kubelet[1388]: I1217 00:40:53.198972    1388 topology_manager.go:215] "Topology Admit Handler" podUID="4cc26e30-3dbe-46b3-ad66-547936b92c1e" podNamespace="kube-system" podName="kube-proxy-ltxr5"
	Dec 17 00:40:53 old-k8s-version-742860 kubelet[1388]: I1217 00:40:53.202767    1388 topology_manager.go:215] "Topology Admit Handler" podUID="f0711ffd-97e7-4981-8eb2-ae13de35c604" podNamespace="kube-system" podName="kindnet-9sklv"
	Dec 17 00:40:53 old-k8s-version-742860 kubelet[1388]: I1217 00:40:53.262744    1388 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzmxd\" (UniqueName: \"kubernetes.io/projected/4cc26e30-3dbe-46b3-ad66-547936b92c1e-kube-api-access-lzmxd\") pod \"kube-proxy-ltxr5\" (UID: \"4cc26e30-3dbe-46b3-ad66-547936b92c1e\") " pod="kube-system/kube-proxy-ltxr5"
	Dec 17 00:40:53 old-k8s-version-742860 kubelet[1388]: I1217 00:40:53.262926    1388 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4cc26e30-3dbe-46b3-ad66-547936b92c1e-lib-modules\") pod \"kube-proxy-ltxr5\" (UID: \"4cc26e30-3dbe-46b3-ad66-547936b92c1e\") " pod="kube-system/kube-proxy-ltxr5"
	Dec 17 00:40:53 old-k8s-version-742860 kubelet[1388]: I1217 00:40:53.262960    1388 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4cc26e30-3dbe-46b3-ad66-547936b92c1e-kube-proxy\") pod \"kube-proxy-ltxr5\" (UID: \"4cc26e30-3dbe-46b3-ad66-547936b92c1e\") " pod="kube-system/kube-proxy-ltxr5"
	Dec 17 00:40:53 old-k8s-version-742860 kubelet[1388]: I1217 00:40:53.262979    1388 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4cc26e30-3dbe-46b3-ad66-547936b92c1e-xtables-lock\") pod \"kube-proxy-ltxr5\" (UID: \"4cc26e30-3dbe-46b3-ad66-547936b92c1e\") " pod="kube-system/kube-proxy-ltxr5"
	Dec 17 00:40:53 old-k8s-version-742860 kubelet[1388]: I1217 00:40:53.263015    1388 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/f0711ffd-97e7-4981-8eb2-ae13de35c604-cni-cfg\") pod \"kindnet-9sklv\" (UID: \"f0711ffd-97e7-4981-8eb2-ae13de35c604\") " pod="kube-system/kindnet-9sklv"
	Dec 17 00:40:53 old-k8s-version-742860 kubelet[1388]: I1217 00:40:53.263144    1388 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f0711ffd-97e7-4981-8eb2-ae13de35c604-lib-modules\") pod \"kindnet-9sklv\" (UID: \"f0711ffd-97e7-4981-8eb2-ae13de35c604\") " pod="kube-system/kindnet-9sklv"
	Dec 17 00:40:53 old-k8s-version-742860 kubelet[1388]: I1217 00:40:53.263202    1388 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7cfcb\" (UniqueName: \"kubernetes.io/projected/f0711ffd-97e7-4981-8eb2-ae13de35c604-kube-api-access-7cfcb\") pod \"kindnet-9sklv\" (UID: \"f0711ffd-97e7-4981-8eb2-ae13de35c604\") " pod="kube-system/kindnet-9sklv"
	Dec 17 00:40:53 old-k8s-version-742860 kubelet[1388]: I1217 00:40:53.263247    1388 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f0711ffd-97e7-4981-8eb2-ae13de35c604-xtables-lock\") pod \"kindnet-9sklv\" (UID: \"f0711ffd-97e7-4981-8eb2-ae13de35c604\") " pod="kube-system/kindnet-9sklv"
	Dec 17 00:40:53 old-k8s-version-742860 kubelet[1388]: I1217 00:40:53.625556    1388 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-ltxr5" podStartSLOduration=0.625506199 podCreationTimestamp="2025-12-17 00:40:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-17 00:40:53.625422185 +0000 UTC m=+14.145200035" watchObservedRunningTime="2025-12-17 00:40:53.625506199 +0000 UTC m=+14.145284051"
	Dec 17 00:40:55 old-k8s-version-742860 kubelet[1388]: I1217 00:40:55.629060    1388 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-9sklv" podStartSLOduration=0.781487798 podCreationTimestamp="2025-12-17 00:40:53 +0000 UTC" firstStartedPulling="2025-12-17 00:40:53.518043334 +0000 UTC m=+14.037821178" lastFinishedPulling="2025-12-17 00:40:55.365540523 +0000 UTC m=+15.885318375" observedRunningTime="2025-12-17 00:40:55.628865105 +0000 UTC m=+16.148642969" watchObservedRunningTime="2025-12-17 00:40:55.628984995 +0000 UTC m=+16.148762845"
	Dec 17 00:41:05 old-k8s-version-742860 kubelet[1388]: I1217 00:41:05.897796    1388 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Dec 17 00:41:05 old-k8s-version-742860 kubelet[1388]: I1217 00:41:05.920172    1388 topology_manager.go:215] "Topology Admit Handler" podUID="081004df-4dc4-442c-9c8a-0bb2ea2f3e06" podNamespace="kube-system" podName="coredns-5dd5756b68-zsfnr"
	Dec 17 00:41:05 old-k8s-version-742860 kubelet[1388]: I1217 00:41:05.921396    1388 topology_manager.go:215] "Topology Admit Handler" podUID="69871fec-dde0-4ea2-9293-5aa6b43dd313" podNamespace="kube-system" podName="storage-provisioner"
	Dec 17 00:41:05 old-k8s-version-742860 kubelet[1388]: I1217 00:41:05.960088    1388 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/081004df-4dc4-442c-9c8a-0bb2ea2f3e06-config-volume\") pod \"coredns-5dd5756b68-zsfnr\" (UID: \"081004df-4dc4-442c-9c8a-0bb2ea2f3e06\") " pod="kube-system/coredns-5dd5756b68-zsfnr"
	Dec 17 00:41:05 old-k8s-version-742860 kubelet[1388]: I1217 00:41:05.960153    1388 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/69871fec-dde0-4ea2-9293-5aa6b43dd313-tmp\") pod \"storage-provisioner\" (UID: \"69871fec-dde0-4ea2-9293-5aa6b43dd313\") " pod="kube-system/storage-provisioner"
	Dec 17 00:41:05 old-k8s-version-742860 kubelet[1388]: I1217 00:41:05.960185    1388 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5s4vg\" (UniqueName: \"kubernetes.io/projected/69871fec-dde0-4ea2-9293-5aa6b43dd313-kube-api-access-5s4vg\") pod \"storage-provisioner\" (UID: \"69871fec-dde0-4ea2-9293-5aa6b43dd313\") " pod="kube-system/storage-provisioner"
	Dec 17 00:41:05 old-k8s-version-742860 kubelet[1388]: I1217 00:41:05.960369    1388 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9z76k\" (UniqueName: \"kubernetes.io/projected/081004df-4dc4-442c-9c8a-0bb2ea2f3e06-kube-api-access-9z76k\") pod \"coredns-5dd5756b68-zsfnr\" (UID: \"081004df-4dc4-442c-9c8a-0bb2ea2f3e06\") " pod="kube-system/coredns-5dd5756b68-zsfnr"
	Dec 17 00:41:06 old-k8s-version-742860 kubelet[1388]: I1217 00:41:06.649849    1388 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.649797228 podCreationTimestamp="2025-12-17 00:40:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-17 00:41:06.649557314 +0000 UTC m=+27.169335164" watchObservedRunningTime="2025-12-17 00:41:06.649797228 +0000 UTC m=+27.169575125"
	Dec 17 00:41:06 old-k8s-version-742860 kubelet[1388]: I1217 00:41:06.658299    1388 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-zsfnr" podStartSLOduration=13.658255623 podCreationTimestamp="2025-12-17 00:40:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-17 00:41:06.658224868 +0000 UTC m=+27.178002718" watchObservedRunningTime="2025-12-17 00:41:06.658255623 +0000 UTC m=+27.178033472"
	Dec 17 00:41:08 old-k8s-version-742860 kubelet[1388]: I1217 00:41:08.846474    1388 topology_manager.go:215] "Topology Admit Handler" podUID="80cfe0a3-1fd0-46e2-90ad-14f7c908c862" podNamespace="default" podName="busybox"
	Dec 17 00:41:08 old-k8s-version-742860 kubelet[1388]: I1217 00:41:08.877973    1388 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85lqq\" (UniqueName: \"kubernetes.io/projected/80cfe0a3-1fd0-46e2-90ad-14f7c908c862-kube-api-access-85lqq\") pod \"busybox\" (UID: \"80cfe0a3-1fd0-46e2-90ad-14f7c908c862\") " pod="default/busybox"
	Dec 17 00:41:10 old-k8s-version-742860 kubelet[1388]: I1217 00:41:10.660041    1388 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=2.028089541 podCreationTimestamp="2025-12-17 00:41:08 +0000 UTC" firstStartedPulling="2025-12-17 00:41:09.164282169 +0000 UTC m=+29.684060011" lastFinishedPulling="2025-12-17 00:41:09.796158439 +0000 UTC m=+30.315936281" observedRunningTime="2025-12-17 00:41:10.659751518 +0000 UTC m=+31.179529368" watchObservedRunningTime="2025-12-17 00:41:10.659965811 +0000 UTC m=+31.179743662"
	
	
	==> storage-provisioner [f68693f221b282f0825d178704e163ee701bf21677a95ccf4311fc1fb25c52a3] <==
	I1217 00:41:06.278348       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1217 00:41:06.286687       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1217 00:41:06.286735       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1217 00:41:06.293341       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1217 00:41:06.293494       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f96f46b2-0bc0-44bd-93ae-70942e078e0e", APIVersion:"v1", ResourceVersion:"395", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-742860_e7dc3fe0-bee7-4e3a-81a3-239590e04909 became leader
	I1217 00:41:06.293514       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-742860_e7dc3fe0-bee7-4e3a-81a3-239590e04909!
	I1217 00:41:06.394287       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-742860_e7dc3fe0-bee7-4e3a-81a3-239590e04909!
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-742860 -n old-k8s-version-742860
helpers_test.go:270: (dbg) Run:  kubectl --context old-k8s-version-742860 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.40s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.17s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-864613 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-864613 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (274.67929ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T00:42:14Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p no-preload-864613 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
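The exit status 11 here is minikube's paused-state check failing, not the addon itself: per the stderr above it shells out to `sudo runc list -f json` on the node and aborts when that command errors with `open /run/runc: no such file or directory`. A minimal shell sketch for reproducing that check by hand on this profile (the crun fallback at the end is an assumption about the node's configured OCI runtime, not something taken from the harness):

	# open a shell on the node for this profile
	minikube -p no-preload-864613 ssh
	# the exact command the paused-state check ran
	sudo runc list -f json
	# confirm whether runc's state directory exists at all
	ls -ld /run/runc
	# assumption: if crio is using crun, its state lives under /run/crun instead
	sudo crun list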
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-864613 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-864613 describe deploy/metrics-server -n kube-system: exit status 1 (75.100634ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-864613 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
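Since the enable step already failed, there is no metrics-server deployment for the image assertion to inspect. A minimal sketch of the manual equivalent of that assertion, usable once the addon does deploy (the jsonpath expression is my own, not the harness's):

	kubectl --context no-preload-864613 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[*].image}'
	# expected to print an image containing: fake.domain/registry.k8s.io/echoserver:1.4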
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-864613
helpers_test.go:244: (dbg) docker inspect no-preload-864613:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d31578a000b6bc0fd7f6db18dfc484bf6d5c523079339ecebac6aa5e2a0209d9",
	        "Created": "2025-12-17T00:41:22.987777185Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 271639,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T00:41:23.017974648Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2e44aac5cae5bb6b68b129ed5c85e80a5c1aac07706537d46ba12326f0e5c3cf",
	        "ResolvConfPath": "/var/lib/docker/containers/d31578a000b6bc0fd7f6db18dfc484bf6d5c523079339ecebac6aa5e2a0209d9/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d31578a000b6bc0fd7f6db18dfc484bf6d5c523079339ecebac6aa5e2a0209d9/hostname",
	        "HostsPath": "/var/lib/docker/containers/d31578a000b6bc0fd7f6db18dfc484bf6d5c523079339ecebac6aa5e2a0209d9/hosts",
	        "LogPath": "/var/lib/docker/containers/d31578a000b6bc0fd7f6db18dfc484bf6d5c523079339ecebac6aa5e2a0209d9/d31578a000b6bc0fd7f6db18dfc484bf6d5c523079339ecebac6aa5e2a0209d9-json.log",
	        "Name": "/no-preload-864613",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "no-preload-864613:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-864613",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d31578a000b6bc0fd7f6db18dfc484bf6d5c523079339ecebac6aa5e2a0209d9",
	                "LowerDir": "/var/lib/docker/overlay2/f190c06e656d738f85b08c978b5e137744361ddd53ad1e7f79ae34378398bcd5-init/diff:/var/lib/docker/overlay2/594b812fd6d8db89dab322ea9e00d43dd555e9709fb5e6953e3873cce717392c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f190c06e656d738f85b08c978b5e137744361ddd53ad1e7f79ae34378398bcd5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f190c06e656d738f85b08c978b5e137744361ddd53ad1e7f79ae34378398bcd5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f190c06e656d738f85b08c978b5e137744361ddd53ad1e7f79ae34378398bcd5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-864613",
	                "Source": "/var/lib/docker/volumes/no-preload-864613/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-864613",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-864613",
	                "name.minikube.sigs.k8s.io": "no-preload-864613",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "c1d94f6224cc0e70106665796b4b733c40fd5847446cc929d87cc35a218a1f1a",
	            "SandboxKey": "/var/run/docker/netns/c1d94f6224cc",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33063"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33064"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33067"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33065"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33066"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-864613": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f576aec2f4916437744d456261513e7c90cb52cd053227c69a0accdc704e8654",
	                    "EndpointID": "da3726a845a246320e1d48208a69ab0b295fc6ed09d5e29e4a94ac103b86510e",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "12:13:d9:b8:3c:84",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-864613",
	                        "d31578a000b6"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
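
The NetworkSettings.Ports map in the inspect output above records how the node container's ports are published on the host loopback. A minimal sketch of pulling one mapped port out of it with a Go-template format string (the container name and the expected value 33063 are taken from the JSON above; any reasonably recent Docker CLI accepts this form):

  docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' no-preload-864613
  # prints 33063 for the container inspected above

The same template works for the other published ports (2376, 5000, 8443, 32443) by swapping the map key.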
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-864613 -n no-preload-864613
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-864613 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p no-preload-864613 logs -n 25: (1.061225641s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ ssh     │ -p NoKubernetes-375259 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                       │ NoKubernetes-375259          │ jenkins │ v1.37.0 │ 17 Dec 25 00:39 UTC │                     │
	│ delete  │ -p NoKubernetes-375259                                                                                                                                                                                                                        │ NoKubernetes-375259          │ jenkins │ v1.37.0 │ 17 Dec 25 00:39 UTC │ 17 Dec 25 00:39 UTC │
	│ start   │ -p force-systemd-flag-452634 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                   │ force-systemd-flag-452634    │ jenkins │ v1.37.0 │ 17 Dec 25 00:39 UTC │ 17 Dec 25 00:39 UTC │
	│ ssh     │ force-systemd-flag-452634 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-452634    │ jenkins │ v1.37.0 │ 17 Dec 25 00:39 UTC │ 17 Dec 25 00:39 UTC │
	│ delete  │ -p force-systemd-flag-452634                                                                                                                                                                                                                  │ force-systemd-flag-452634    │ jenkins │ v1.37.0 │ 17 Dec 25 00:39 UTC │ 17 Dec 25 00:39 UTC │
	│ start   │ -p cert-options-636512 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-636512          │ jenkins │ v1.37.0 │ 17 Dec 25 00:39 UTC │ 17 Dec 25 00:40 UTC │
	│ ssh     │ cert-options-636512 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-636512          │ jenkins │ v1.37.0 │ 17 Dec 25 00:40 UTC │ 17 Dec 25 00:40 UTC │
	│ ssh     │ -p cert-options-636512 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-636512          │ jenkins │ v1.37.0 │ 17 Dec 25 00:40 UTC │ 17 Dec 25 00:40 UTC │
	│ delete  │ -p cert-options-636512                                                                                                                                                                                                                        │ cert-options-636512          │ jenkins │ v1.37.0 │ 17 Dec 25 00:40 UTC │ 17 Dec 25 00:40 UTC │
	│ start   │ -p old-k8s-version-742860 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-742860       │ jenkins │ v1.37.0 │ 17 Dec 25 00:40 UTC │ 17 Dec 25 00:41 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-742860 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-742860       │ jenkins │ v1.37.0 │ 17 Dec 25 00:41 UTC │                     │
	│ stop    │ -p old-k8s-version-742860 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-742860       │ jenkins │ v1.37.0 │ 17 Dec 25 00:41 UTC │ 17 Dec 25 00:41 UTC │
	│ delete  │ -p stopped-upgrade-028618                                                                                                                                                                                                                     │ stopped-upgrade-028618       │ jenkins │ v1.37.0 │ 17 Dec 25 00:41 UTC │ 17 Dec 25 00:41 UTC │
	│ start   │ -p no-preload-864613 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                │ no-preload-864613            │ jenkins │ v1.37.0 │ 17 Dec 25 00:41 UTC │ 17 Dec 25 00:42 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-742860 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-742860       │ jenkins │ v1.37.0 │ 17 Dec 25 00:41 UTC │ 17 Dec 25 00:41 UTC │
	│ start   │ -p old-k8s-version-742860 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-742860       │ jenkins │ v1.37.0 │ 17 Dec 25 00:41 UTC │                     │
	│ start   │ -p cert-expiration-753607 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-753607       │ jenkins │ v1.37.0 │ 17 Dec 25 00:41 UTC │ 17 Dec 25 00:41 UTC │
	│ delete  │ -p cert-expiration-753607                                                                                                                                                                                                                     │ cert-expiration-753607       │ jenkins │ v1.37.0 │ 17 Dec 25 00:41 UTC │ 17 Dec 25 00:42 UTC │
	│ start   │ -p embed-certs-153232 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                        │ embed-certs-153232           │ jenkins │ v1.37.0 │ 17 Dec 25 00:42 UTC │                     │
	│ start   │ -p kubernetes-upgrade-803959 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-803959    │ jenkins │ v1.37.0 │ 17 Dec 25 00:42 UTC │                     │
	│ start   │ -p kubernetes-upgrade-803959 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                               │ kubernetes-upgrade-803959    │ jenkins │ v1.37.0 │ 17 Dec 25 00:42 UTC │ 17 Dec 25 00:42 UTC │
	│ delete  │ -p kubernetes-upgrade-803959                                                                                                                                                                                                                  │ kubernetes-upgrade-803959    │ jenkins │ v1.37.0 │ 17 Dec 25 00:42 UTC │ 17 Dec 25 00:42 UTC │
	│ delete  │ -p disable-driver-mounts-827138                                                                                                                                                                                                               │ disable-driver-mounts-827138 │ jenkins │ v1.37.0 │ 17 Dec 25 00:42 UTC │ 17 Dec 25 00:42 UTC │
	│ start   │ -p default-k8s-diff-port-414413 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                      │ default-k8s-diff-port-414413 │ jenkins │ v1.37.0 │ 17 Dec 25 00:42 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-864613 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-864613            │ jenkins │ v1.37.0 │ 17 Dec 25 00:42 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 00:42:13
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 00:42:13.425350  284412 out.go:360] Setting OutFile to fd 1 ...
	I1217 00:42:13.425613  284412 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:42:13.425622  284412 out.go:374] Setting ErrFile to fd 2...
	I1217 00:42:13.425627  284412 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:42:13.425810  284412 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-12816/.minikube/bin
	I1217 00:42:13.426400  284412 out.go:368] Setting JSON to false
	I1217 00:42:13.427820  284412 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":5083,"bootTime":1765927050,"procs":315,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 00:42:13.427877  284412 start.go:143] virtualization: kvm guest
	I1217 00:42:13.429987  284412 out.go:179] * [default-k8s-diff-port-414413] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 00:42:13.431792  284412 notify.go:221] Checking for updates...
	I1217 00:42:13.431812  284412 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 00:42:13.433108  284412 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 00:42:13.434449  284412 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22168-12816/kubeconfig
	I1217 00:42:13.435642  284412 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-12816/.minikube
	I1217 00:42:12.992537  280822 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 00:42:13.007319  280822 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 00:42:13.044148  280822 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 00:42:13.044207  280822 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:42:13.054374  280822 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1217 00:42:13.054427  280822 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:42:13.063502  280822 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:42:13.073556  280822 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:42:13.082334  280822 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 00:42:13.090303  280822 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:42:13.098633  280822 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:42:13.111142  280822 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:42:13.119434  280822 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 00:42:13.126679  280822 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 00:42:13.134165  280822 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:42:13.215480  280822 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1217 00:42:13.345776  280822 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 00:42:13.345849  280822 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 00:42:13.349830  280822 start.go:564] Will wait 60s for crictl version
	I1217 00:42:13.349869  280822 ssh_runner.go:195] Run: which crictl
	I1217 00:42:13.353343  280822 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 00:42:13.379609  280822 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1217 00:42:13.379686  280822 ssh_runner.go:195] Run: crio --version
	I1217 00:42:13.408383  280822 ssh_runner.go:195] Run: crio --version
	I1217 00:42:13.438042  284412 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 00:42:13.438044  280822 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1217 00:42:13.439195  284412 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 00:42:13.440710  284412 config.go:182] Loaded profile config "embed-certs-153232": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:42:13.440811  284412 config.go:182] Loaded profile config "no-preload-864613": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1217 00:42:13.440892  284412 config.go:182] Loaded profile config "old-k8s-version-742860": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1217 00:42:13.440976  284412 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 00:42:13.467911  284412 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1217 00:42:13.468056  284412 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:42:13.531873  284412 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-17 00:42:13.522132291 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 00:42:13.531977  284412 docker.go:319] overlay module found
	I1217 00:42:13.535499  284412 out.go:179] * Using the docker driver based on user configuration
	I1217 00:42:13.536777  284412 start.go:309] selected driver: docker
	I1217 00:42:13.536790  284412 start.go:927] validating driver "docker" against <nil>
	I1217 00:42:13.536803  284412 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 00:42:13.537387  284412 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:42:13.593933  284412 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-17 00:42:13.584322089 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 00:42:13.594200  284412 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1217 00:42:13.594521  284412 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 00:42:13.596131  284412 out.go:179] * Using Docker driver with root privileges
	I1217 00:42:13.597366  284412 cni.go:84] Creating CNI manager for ""
	I1217 00:42:13.597447  284412 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 00:42:13.597463  284412 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1217 00:42:13.597538  284412 start.go:353] cluster config:
	{Name:default-k8s-diff-port-414413 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-414413 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SS
HAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:42:13.599069  284412 out.go:179] * Starting "default-k8s-diff-port-414413" primary control-plane node in "default-k8s-diff-port-414413" cluster
	I1217 00:42:13.600353  284412 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 00:42:13.601785  284412 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 00:42:13.603055  284412 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1217 00:42:13.603094  284412 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 00:42:13.603099  284412 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22168-12816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1217 00:42:13.603137  284412 cache.go:65] Caching tarball of preloaded images
	I1217 00:42:13.603233  284412 preload.go:238] Found /home/jenkins/minikube-integration/22168-12816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1217 00:42:13.603249  284412 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1217 00:42:13.603342  284412 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/default-k8s-diff-port-414413/config.json ...
	I1217 00:42:13.603366  284412 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/default-k8s-diff-port-414413/config.json: {Name:mka6d4da213f94b51ee6aa8917291169616177e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:42:13.624885  284412 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 00:42:13.624903  284412 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 00:42:13.624919  284412 cache.go:243] Successfully downloaded all kic artifacts
	I1217 00:42:13.624956  284412 start.go:360] acquireMachinesLock for default-k8s-diff-port-414413: {Name:mke046f6be338ac4ae580049fea3f7de7f7546a0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 00:42:13.625073  284412 start.go:364] duration metric: took 97.859µs to acquireMachinesLock for "default-k8s-diff-port-414413"
	I1217 00:42:13.625114  284412 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-414413 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-414413 Namespace:default API
ServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:
false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 00:42:13.625172  284412 start.go:125] createHost starting for "" (driver="docker")
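
The sed calls earlier in this section (the PID 280822 lines) rewrite the CRI-O drop-in at /etc/crio/crio.conf.d/02-crio.conf for the profile that process is starting: they set the pause image, switch the cgroup manager to systemd, re-add conmon_cgroup, and open unprivileged low ports via default_sysctls. A minimal spot-check of the result, with <profile> as a placeholder for that profile name; the expected values are exactly what those sed commands write:

  out/minikube-linux-amd64 -p <profile> ssh "sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf"
  # expected lines, roughly:
  #   pause_image = "registry.k8s.io/pause:3.10.1"
  #   cgroup_manager = "systemd"
  #   conmon_cgroup = "pod"
  #   "net.ipv4.ip_unprivileged_port_start=0",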
	
	
	==> CRI-O <==
	Dec 17 00:42:04 no-preload-864613 crio[770]: time="2025-12-17T00:42:04.111634592Z" level=info msg="Starting container: 31b83acc24eae8c0e7dbd0d8e7b27f705e1244391d06c37b0e65e72cec84eec9" id=5b6dd56c-f4bd-4fcf-a1c3-48c503235749 name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 00:42:04 no-preload-864613 crio[770]: time="2025-12-17T00:42:04.115606953Z" level=info msg="Started container" PID=2860 containerID=31b83acc24eae8c0e7dbd0d8e7b27f705e1244391d06c37b0e65e72cec84eec9 description=kube-system/storage-provisioner/storage-provisioner id=5b6dd56c-f4bd-4fcf-a1c3-48c503235749 name=/runtime.v1.RuntimeService/StartContainer sandboxID=ef4f7486b409db7bf7b1830fd318f1353161c4a9ec966ad5a769a951f82445a4
	Dec 17 00:42:07 no-preload-864613 crio[770]: time="2025-12-17T00:42:07.756193543Z" level=info msg="Running pod sandbox: default/busybox/POD" id=4a2eb5d6-a233-4feb-85d2-6154391725e9 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 17 00:42:07 no-preload-864613 crio[770]: time="2025-12-17T00:42:07.75628528Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 00:42:07 no-preload-864613 crio[770]: time="2025-12-17T00:42:07.762303787Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:324e04e99d251e8d63b3eb33203d7aab13d4006f9f4735222716506538f65d0c UID:a45ae093-ee01-4707-9a4c-570ad3b0770c NetNS:/var/run/netns/14a620b1-1b1d-4180-9704-54b910a8d19b Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000804780}] Aliases:map[]}"
	Dec 17 00:42:07 no-preload-864613 crio[770]: time="2025-12-17T00:42:07.762340527Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 17 00:42:07 no-preload-864613 crio[770]: time="2025-12-17T00:42:07.773391379Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:324e04e99d251e8d63b3eb33203d7aab13d4006f9f4735222716506538f65d0c UID:a45ae093-ee01-4707-9a4c-570ad3b0770c NetNS:/var/run/netns/14a620b1-1b1d-4180-9704-54b910a8d19b Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000804780}] Aliases:map[]}"
	Dec 17 00:42:07 no-preload-864613 crio[770]: time="2025-12-17T00:42:07.773558006Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 17 00:42:07 no-preload-864613 crio[770]: time="2025-12-17T00:42:07.774327869Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 17 00:42:07 no-preload-864613 crio[770]: time="2025-12-17T00:42:07.775236293Z" level=info msg="Ran pod sandbox 324e04e99d251e8d63b3eb33203d7aab13d4006f9f4735222716506538f65d0c with infra container: default/busybox/POD" id=4a2eb5d6-a233-4feb-85d2-6154391725e9 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 17 00:42:07 no-preload-864613 crio[770]: time="2025-12-17T00:42:07.776586113Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=e43ef5c1-6e2a-4eac-abd9-56aaa987a6e5 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:42:07 no-preload-864613 crio[770]: time="2025-12-17T00:42:07.776746278Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=e43ef5c1-6e2a-4eac-abd9-56aaa987a6e5 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:42:07 no-preload-864613 crio[770]: time="2025-12-17T00:42:07.776799945Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=e43ef5c1-6e2a-4eac-abd9-56aaa987a6e5 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:42:07 no-preload-864613 crio[770]: time="2025-12-17T00:42:07.777488092Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=00dedaef-9098-4614-b832-624edde1b4d3 name=/runtime.v1.ImageService/PullImage
	Dec 17 00:42:07 no-preload-864613 crio[770]: time="2025-12-17T00:42:07.779691984Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 17 00:42:08 no-preload-864613 crio[770]: time="2025-12-17T00:42:08.417555115Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=00dedaef-9098-4614-b832-624edde1b4d3 name=/runtime.v1.ImageService/PullImage
	Dec 17 00:42:08 no-preload-864613 crio[770]: time="2025-12-17T00:42:08.418162582Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=7af278dd-a935-4143-8d32-2f858e46e72c name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:42:08 no-preload-864613 crio[770]: time="2025-12-17T00:42:08.419980295Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=dfa6f242-2c4d-4466-b26d-1525dbc3e6cf name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:42:08 no-preload-864613 crio[770]: time="2025-12-17T00:42:08.423402709Z" level=info msg="Creating container: default/busybox/busybox" id=1f07f86a-f575-4ce1-b0b4-245503f99786 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 00:42:08 no-preload-864613 crio[770]: time="2025-12-17T00:42:08.423551854Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 00:42:08 no-preload-864613 crio[770]: time="2025-12-17T00:42:08.43204804Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 00:42:08 no-preload-864613 crio[770]: time="2025-12-17T00:42:08.433746652Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 00:42:08 no-preload-864613 crio[770]: time="2025-12-17T00:42:08.460025476Z" level=info msg="Created container c67ad1c7514547ea1429296e63ec746496fa46ed0312d2eb56c26b3d708320f3: default/busybox/busybox" id=1f07f86a-f575-4ce1-b0b4-245503f99786 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 00:42:08 no-preload-864613 crio[770]: time="2025-12-17T00:42:08.460652052Z" level=info msg="Starting container: c67ad1c7514547ea1429296e63ec746496fa46ed0312d2eb56c26b3d708320f3" id=26fc566b-1d7d-487d-b209-c0f58d3c23ba name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 00:42:08 no-preload-864613 crio[770]: time="2025-12-17T00:42:08.462494646Z" level=info msg="Started container" PID=2935 containerID=c67ad1c7514547ea1429296e63ec746496fa46ed0312d2eb56c26b3d708320f3 description=default/busybox/busybox id=26fc566b-1d7d-487d-b209-c0f58d3c23ba name=/runtime.v1.RuntimeService/StartContainer sandboxID=324e04e99d251e8d63b3eb33203d7aab13d4006f9f4735222716506538f65d0c
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	c67ad1c751454       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   7 seconds ago       Running             busybox                   0                   324e04e99d251       busybox                                     default
	31b83acc24eae       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      11 seconds ago      Running             storage-provisioner       0                   ef4f7486b409d       storage-provisioner                         kube-system
	1c49f5f4979fe       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                      11 seconds ago      Running             coredns                   0                   6331635794f01       coredns-7d764666f9-6ql6r                    kube-system
	06bfde5ee29e9       docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11    23 seconds ago      Running             kindnet-cni               0                   325c09773f221       kindnet-bpf4x                               kube-system
	c6119fc47be9b       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810                                      25 seconds ago      Running             kube-proxy                0                   4d616628faf2d       kube-proxy-2kddk                            kube-system
	db0526325d099       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc                                      35 seconds ago      Running             kube-controller-manager   0                   aa50a0c487077       kube-controller-manager-no-preload-864613   kube-system
	38e201c1b148f       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46                                      35 seconds ago      Running             kube-scheduler            0                   b1b93604c7f9c       kube-scheduler-no-preload-864613            kube-system
	92c2059525f31       aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b                                      35 seconds ago      Running             kube-apiserver            0                   f018bd7436464       kube-apiserver-no-preload-864613            kube-system
	8509bd748a308       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                      35 seconds ago      Running             etcd                      0                   f01d29442b0dd       etcd-no-preload-864613                      kube-system
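
A minimal sketch of drilling into one of these entries from the node itself (for example after `out/minikube-linux-amd64 -p no-preload-864613 ssh`); the truncated IDs are the ones printed in the table above, and crictl accepts them as prefixes:

  sudo crictl ps --name coredns
  sudo crictl logs 1c49f5f4979fe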
	
	
	==> coredns [1c49f5f4979fec511a5df7f76f19c243c952f35ee6a81f095da2e5b7e32741e3] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:39071 - 17544 "HINFO IN 6881151423731077615.6898985927940065317. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.119054707s
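
CoreDNS only shows its own startup HINFO probe here. A minimal end-to-end check of the DNS path would be to resolve a service name from the busybox pod created above (assuming kubectl is pointed at the no-preload-864613 context and that the gcr.io/k8s-minikube/busybox image ships nslookup):

  kubectl --context no-preload-864613 exec busybox -- nslookup kubernetes.default
  # should answer from the cluster DNS (10.96.0.10) with the kubernetes service ClusterIP 10.96.0.1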
	
	
	==> describe nodes <==
	Name:               no-preload-864613
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-864613
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c7bb9b74fe8fa422b352c813eb039f077f405cb1
	                    minikube.k8s.io/name=no-preload-864613
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_17T00_41_45_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Dec 2025 00:41:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-864613
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Dec 2025 00:42:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Dec 2025 00:42:15 +0000   Wed, 17 Dec 2025 00:41:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Dec 2025 00:42:15 +0000   Wed, 17 Dec 2025 00:41:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Dec 2025 00:42:15 +0000   Wed, 17 Dec 2025 00:41:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Dec 2025 00:42:15 +0000   Wed, 17 Dec 2025 00:42:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    no-preload-864613
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 f435001bb2f82675fe905cfc693dda54
	  System UUID:                213ec30f-ec82-463e-b257-cb730a6beffc
	  Boot ID:                    0e9cedc6-c46e-4354-b3d2-9272a8b33ae5
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 coredns-7d764666f9-6ql6r                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     25s
	  kube-system                 etcd-no-preload-864613                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         31s
	  kube-system                 kindnet-bpf4x                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      25s
	  kube-system                 kube-apiserver-no-preload-864613             250m (3%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-controller-manager-no-preload-864613    200m (2%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-proxy-2kddk                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-scheduler-no-preload-864613             100m (1%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
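
A quick consistency check against the Non-terminated Pods table above: CPU requests 250m + 200m + 100m + 100m + 100m + 100m = 850m, CPU limits 100m (kindnet only), memory requests 70Mi + 100Mi + 50Mi = 220Mi, and memory limits 170Mi + 50Mi = 220Mi, which is what the Allocated resources summary reports.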
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  26s   node-controller  Node no-preload-864613 event: Registered Node no-preload-864613 in Controller
	
	
	==> dmesg <==
	[  +0.089382] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024236] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.864694] kauditd_printk_skb: 47 callbacks suppressed
	[Dec17 00:07] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +1.006904] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +1.023891] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +1.023891] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +1.023853] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +1.023879] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +2.048755] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +4.030595] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +8.447143] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[ +16.382404] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000015] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[Dec17 00:08] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	
	
	==> etcd [8509bd748a30893ba8e476953e721f6b5df4bb4c0693e47a10c084389201c379] <==
	{"level":"warn","ts":"2025-12-17T00:41:41.589191Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49972","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:41:41.595733Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49984","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:41:41.603549Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50002","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:41:41.614266Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50016","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:41:41.624120Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50044","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:41:41.637938Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50092","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:41:41.645967Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50110","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:41:41.659174Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50128","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:41:41.665706Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50140","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:41:41.672234Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50158","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:41:41.679106Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50164","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:41:41.685779Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50188","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:41:41.698583Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50220","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:41:41.705482Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50240","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:41:41.712960Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:41:41.719280Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50280","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:41:41.741240Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:41:41.744924Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50322","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:41:41.751743Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50330","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:41:41.758467Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50344","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:41:41.765280Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50370","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-17T00:42:03.730656Z","caller":"traceutil/trace.go:172","msg":"trace[597548612] transaction","detail":"{read_only:false; response_revision:408; number_of_response:1; }","duration":"127.921283ms","start":"2025-12-17T00:42:03.602711Z","end":"2025-12-17T00:42:03.730632Z","steps":["trace[597548612] 'process raft request'  (duration: 65.888412ms)","trace[597548612] 'compare'  (duration: 61.536697ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-17T00:42:06.332668Z","caller":"traceutil/trace.go:172","msg":"trace[267663001] transaction","detail":"{read_only:false; response_revision:432; number_of_response:1; }","duration":"146.255976ms","start":"2025-12-17T00:42:06.186396Z","end":"2025-12-17T00:42:06.332652Z","steps":["trace[267663001] 'process raft request'  (duration: 146.140436ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-17T00:42:06.726708Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"191.029169ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-17T00:42:06.726831Z","caller":"traceutil/trace.go:172","msg":"trace[1239351292] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:432; }","duration":"191.147108ms","start":"2025-12-17T00:42:06.535648Z","end":"2025-12-17T00:42:06.726795Z","steps":["trace[1239351292] 'range keys from in-memory index tree'  (duration: 190.93168ms)"],"step_count":1}
	
	
	==> kernel <==
	 00:42:16 up  1:24,  0 user,  load average: 3.33, 2.64, 1.82
	Linux no-preload-864613 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [06bfde5ee29e969ba07353ea78f37964f03da7aef1f5c54f2f41ffe149cfab31] <==
	I1217 00:41:52.795810       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1217 00:41:52.796228       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1217 00:41:52.796340       1 main.go:148] setting mtu 1500 for CNI 
	I1217 00:41:52.796357       1 main.go:178] kindnetd IP family: "ipv4"
	I1217 00:41:52.796431       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-17T00:41:52Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1217 00:41:52.995093       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1217 00:41:52.995115       1 controller.go:381] "Waiting for informer caches to sync"
	I1217 00:41:52.995125       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1217 00:41:52.995245       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1217 00:41:53.488731       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1217 00:41:53.488761       1 metrics.go:72] Registering metrics
	I1217 00:41:53.488870       1 controller.go:711] "Syncing nftables rules"
	I1217 00:42:02.998115       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1217 00:42:02.998191       1 main.go:301] handling current node
	I1217 00:42:12.998388       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1217 00:42:12.998433       1 main.go:301] handling current node
	
	
	==> kube-apiserver [92c2059525f31b8f9d928cfe17950cedc909bc478757b8d5912a41939acdbbab] <==
	I1217 00:41:42.324679       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1217 00:41:42.324684       1 cache.go:39] Caches are synced for autoregister controller
	I1217 00:41:42.325450       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1217 00:41:42.327536       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1217 00:41:42.327717       1 default_servicecidr_controller.go:231] Setting default ServiceCIDR condition Ready to True
	I1217 00:41:42.334933       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1217 00:41:42.521669       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1217 00:41:43.227622       1 storage_scheduling.go:123] created PriorityClass system-node-critical with value 2000001000
	I1217 00:41:43.233602       1 storage_scheduling.go:123] created PriorityClass system-cluster-critical with value 2000000000
	I1217 00:41:43.233622       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1217 00:41:43.701604       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1217 00:41:43.735934       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1217 00:41:43.830921       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1217 00:41:43.837615       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1217 00:41:43.838623       1 controller.go:667] quota admission added evaluator for: endpoints
	I1217 00:41:43.842410       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1217 00:41:44.246081       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1217 00:41:45.003483       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1217 00:41:45.016980       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1217 00:41:45.026742       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1217 00:41:49.950506       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1217 00:41:49.954024       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1217 00:41:50.050304       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1217 00:41:50.147840       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1217 00:42:14.556156       1 conn.go:339] Error on socket receive: read tcp 192.168.103.2:8443->192.168.103.1:45624: use of closed network connection
	
	
	==> kube-controller-manager [db0526325d099e8bc1f177a88f60fd3193db095a00033e5a78c31ed14d022213] <==
	I1217 00:41:49.053817       1 shared_informer.go:377] "Caches are synced"
	I1217 00:41:49.053829       1 shared_informer.go:377] "Caches are synced"
	I1217 00:41:49.053858       1 shared_informer.go:377] "Caches are synced"
	I1217 00:41:49.053866       1 shared_informer.go:377] "Caches are synced"
	I1217 00:41:49.053713       1 shared_informer.go:377] "Caches are synced"
	I1217 00:41:49.053807       1 shared_informer.go:377] "Caches are synced"
	I1217 00:41:49.053900       1 range_allocator.go:177] "Sending events to api server"
	I1217 00:41:49.053941       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1217 00:41:49.053954       1 shared_informer.go:370] "Waiting for caches to sync"
	I1217 00:41:49.053960       1 shared_informer.go:377] "Caches are synced"
	I1217 00:41:49.054077       1 shared_informer.go:377] "Caches are synced"
	I1217 00:41:49.054293       1 shared_informer.go:377] "Caches are synced"
	I1217 00:41:49.054316       1 shared_informer.go:377] "Caches are synced"
	I1217 00:41:49.054453       1 shared_informer.go:377] "Caches are synced"
	I1217 00:41:49.054545       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1217 00:41:49.054640       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="no-preload-864613"
	I1217 00:41:49.054687       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I1217 00:41:49.058370       1 shared_informer.go:377] "Caches are synced"
	I1217 00:41:49.059287       1 shared_informer.go:370] "Waiting for caches to sync"
	I1217 00:41:49.062536       1 range_allocator.go:433] "Set node PodCIDR" node="no-preload-864613" podCIDRs=["10.244.0.0/24"]
	I1217 00:41:49.154673       1 shared_informer.go:377] "Caches are synced"
	I1217 00:41:49.154688       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1217 00:41:49.154693       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1217 00:41:49.159550       1 shared_informer.go:377] "Caches are synced"
	I1217 00:42:04.057473       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [c6119fc47be9ba0039117b06b90c65ee99d24414635130b3c0ff60c251df7a2d] <==
	I1217 00:41:50.556161       1 server_linux.go:53] "Using iptables proxy"
	I1217 00:41:50.618967       1 shared_informer.go:370] "Waiting for caches to sync"
	I1217 00:41:50.719755       1 shared_informer.go:377] "Caches are synced"
	I1217 00:41:50.719799       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1217 00:41:50.719891       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1217 00:41:50.740372       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1217 00:41:50.740417       1 server_linux.go:136] "Using iptables Proxier"
	I1217 00:41:50.746697       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1217 00:41:50.747268       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1217 00:41:50.747893       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 00:41:50.749510       1 config.go:403] "Starting serviceCIDR config controller"
	I1217 00:41:50.749526       1 config.go:106] "Starting endpoint slice config controller"
	I1217 00:41:50.749532       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1217 00:41:50.749538       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1217 00:41:50.749580       1 config.go:200] "Starting service config controller"
	I1217 00:41:50.749597       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1217 00:41:50.750608       1 config.go:309] "Starting node config controller"
	I1217 00:41:50.750745       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1217 00:41:50.750754       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1217 00:41:50.850011       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1217 00:41:50.850020       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1217 00:41:50.850069       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [38e201c1b148f236edb36be3024e9aa4b404acb2461993aae9e32b963ecb545b] <==
	E1217 00:41:42.282803       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1217 00:41:42.282840       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1217 00:41:42.282843       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1217 00:41:42.282890       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1217 00:41:43.130657       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot watch resource \"replicationcontrollers\" in API group \"\" at the cluster scope"
	E1217 00:41:43.131880       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1217 00:41:43.182289       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope"
	E1217 00:41:43.183510       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1217 00:41:43.185678       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="pods is forbidden: User \"system:kube-scheduler\" cannot watch resource \"pods\" in API group \"\" at the cluster scope"
	E1217 00:41:43.186684       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1217 00:41:43.202929       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope"
	E1217 00:41:43.203834       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1217 00:41:43.268552       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope"
	E1217 00:41:43.269768       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1217 00:41:43.270978       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="nodes is forbidden: User \"system:kube-scheduler\" cannot watch resource \"nodes\" in API group \"\" at the cluster scope"
	E1217 00:41:43.272330       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1217 00:41:43.279764       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\""
	E1217 00:41:43.280899       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1693" type="*v1.ConfigMap"
	E1217 00:41:43.288144       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope"
	E1217 00:41:43.290258       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot watch resource \"persistentvolumes\" in API group \"\" at the cluster scope"
	E1217 00:41:43.292303       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1217 00:41:43.296513       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1217 00:41:43.480485       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot watch resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope"
	E1217 00:41:43.481501       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	I1217 00:41:45.676337       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 17 00:41:50 no-preload-864613 kubelet[2247]: I1217 00:41:50.179773    2247 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/0b42df61-fef2-41ff-83f3-0abede84a5fb-cni-cfg\") pod \"kindnet-bpf4x\" (UID: \"0b42df61-fef2-41ff-83f3-0abede84a5fb\") " pod="kube-system/kindnet-bpf4x"
	Dec 17 00:41:50 no-preload-864613 kubelet[2247]: I1217 00:41:50.179800    2247 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7153c193-9583-4abd-a828-ec1dc91151e2-xtables-lock\") pod \"kube-proxy-2kddk\" (UID: \"7153c193-9583-4abd-a828-ec1dc91151e2\") " pod="kube-system/kube-proxy-2kddk"
	Dec 17 00:41:50 no-preload-864613 kubelet[2247]: I1217 00:41:50.179819    2247 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6x56\" (UniqueName: \"kubernetes.io/projected/7153c193-9583-4abd-a828-ec1dc91151e2-kube-api-access-n6x56\") pod \"kube-proxy-2kddk\" (UID: \"7153c193-9583-4abd-a828-ec1dc91151e2\") " pod="kube-system/kube-proxy-2kddk"
	Dec 17 00:41:50 no-preload-864613 kubelet[2247]: I1217 00:41:50.179843    2247 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7153c193-9583-4abd-a828-ec1dc91151e2-kube-proxy\") pod \"kube-proxy-2kddk\" (UID: \"7153c193-9583-4abd-a828-ec1dc91151e2\") " pod="kube-system/kube-proxy-2kddk"
	Dec 17 00:41:50 no-preload-864613 kubelet[2247]: I1217 00:41:50.179863    2247 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0b42df61-fef2-41ff-83f3-0abede84a5fb-xtables-lock\") pod \"kindnet-bpf4x\" (UID: \"0b42df61-fef2-41ff-83f3-0abede84a5fb\") " pod="kube-system/kindnet-bpf4x"
	Dec 17 00:41:50 no-preload-864613 kubelet[2247]: I1217 00:41:50.179920    2247 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zg8cz\" (UniqueName: \"kubernetes.io/projected/0b42df61-fef2-41ff-83f3-0abede84a5fb-kube-api-access-zg8cz\") pod \"kindnet-bpf4x\" (UID: \"0b42df61-fef2-41ff-83f3-0abede84a5fb\") " pod="kube-system/kindnet-bpf4x"
	Dec 17 00:41:50 no-preload-864613 kubelet[2247]: I1217 00:41:50.914572    2247 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-2kddk" podStartSLOduration=0.914555256 podStartE2EDuration="914.555256ms" podCreationTimestamp="2025-12-17 00:41:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-17 00:41:50.91435632 +0000 UTC m=+6.154620654" watchObservedRunningTime="2025-12-17 00:41:50.914555256 +0000 UTC m=+6.154819591"
	Dec 17 00:41:52 no-preload-864613 kubelet[2247]: I1217 00:41:52.907707    2247 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kindnet-bpf4x" podStartSLOduration=0.823907848 podStartE2EDuration="2.90769035s" podCreationTimestamp="2025-12-17 00:41:50 +0000 UTC" firstStartedPulling="2025-12-17 00:41:50.475109897 +0000 UTC m=+5.715374226" lastFinishedPulling="2025-12-17 00:41:52.55889241 +0000 UTC m=+7.799156728" observedRunningTime="2025-12-17 00:41:52.907571822 +0000 UTC m=+8.147836157" watchObservedRunningTime="2025-12-17 00:41:52.90769035 +0000 UTC m=+8.147954685"
	Dec 17 00:41:53 no-preload-864613 kubelet[2247]: E1217 00:41:53.156631    2247 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-no-preload-864613" containerName="kube-scheduler"
	Dec 17 00:41:56 no-preload-864613 kubelet[2247]: E1217 00:41:56.049793    2247 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-no-preload-864613" containerName="etcd"
	Dec 17 00:41:57 no-preload-864613 kubelet[2247]: E1217 00:41:57.368667    2247 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-no-preload-864613" containerName="kube-controller-manager"
	Dec 17 00:41:58 no-preload-864613 kubelet[2247]: E1217 00:41:58.610517    2247 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-no-preload-864613" containerName="kube-apiserver"
	Dec 17 00:42:03 no-preload-864613 kubelet[2247]: E1217 00:42:03.163502    2247 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-no-preload-864613" containerName="kube-scheduler"
	Dec 17 00:42:03 no-preload-864613 kubelet[2247]: I1217 00:42:03.503440    2247 kubelet_node_status.go:427] "Fast updating node status as it just became ready"
	Dec 17 00:42:03 no-preload-864613 kubelet[2247]: I1217 00:42:03.777669    2247 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/bf26b73d-473d-43a0-bf42-4d69abdd9e31-tmp\") pod \"storage-provisioner\" (UID: \"bf26b73d-473d-43a0-bf42-4d69abdd9e31\") " pod="kube-system/storage-provisioner"
	Dec 17 00:42:03 no-preload-864613 kubelet[2247]: I1217 00:42:03.777729    2247 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7fe29911-eb02-4cea-b42b-254fe65a4e65-config-volume\") pod \"coredns-7d764666f9-6ql6r\" (UID: \"7fe29911-eb02-4cea-b42b-254fe65a4e65\") " pod="kube-system/coredns-7d764666f9-6ql6r"
	Dec 17 00:42:03 no-preload-864613 kubelet[2247]: I1217 00:42:03.777758    2247 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5ljf\" (UniqueName: \"kubernetes.io/projected/7fe29911-eb02-4cea-b42b-254fe65a4e65-kube-api-access-g5ljf\") pod \"coredns-7d764666f9-6ql6r\" (UID: \"7fe29911-eb02-4cea-b42b-254fe65a4e65\") " pod="kube-system/coredns-7d764666f9-6ql6r"
	Dec 17 00:42:03 no-preload-864613 kubelet[2247]: I1217 00:42:03.777787    2247 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmnrt\" (UniqueName: \"kubernetes.io/projected/bf26b73d-473d-43a0-bf42-4d69abdd9e31-kube-api-access-cmnrt\") pod \"storage-provisioner\" (UID: \"bf26b73d-473d-43a0-bf42-4d69abdd9e31\") " pod="kube-system/storage-provisioner"
	Dec 17 00:42:04 no-preload-864613 kubelet[2247]: E1217 00:42:04.932227    2247 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-6ql6r" containerName="coredns"
	Dec 17 00:42:04 no-preload-864613 kubelet[2247]: I1217 00:42:04.948778    2247 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-6ql6r" podStartSLOduration=14.9487606 podStartE2EDuration="14.9487606s" podCreationTimestamp="2025-12-17 00:41:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-17 00:42:04.94865344 +0000 UTC m=+20.188917774" watchObservedRunningTime="2025-12-17 00:42:04.9487606 +0000 UTC m=+20.189024937"
	Dec 17 00:42:05 no-preload-864613 kubelet[2247]: E1217 00:42:05.942086    2247 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-6ql6r" containerName="coredns"
	Dec 17 00:42:06 no-preload-864613 kubelet[2247]: E1217 00:42:06.941584    2247 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-6ql6r" containerName="coredns"
	Dec 17 00:42:07 no-preload-864613 kubelet[2247]: I1217 00:42:07.448820    2247 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=16.448796287 podStartE2EDuration="16.448796287s" podCreationTimestamp="2025-12-17 00:41:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-17 00:42:04.975270174 +0000 UTC m=+20.215534509" watchObservedRunningTime="2025-12-17 00:42:07.448796287 +0000 UTC m=+22.689060621"
	Dec 17 00:42:07 no-preload-864613 kubelet[2247]: I1217 00:42:07.506562    2247 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkhnt\" (UniqueName: \"kubernetes.io/projected/a45ae093-ee01-4707-9a4c-570ad3b0770c-kube-api-access-pkhnt\") pod \"busybox\" (UID: \"a45ae093-ee01-4707-9a4c-570ad3b0770c\") " pod="default/busybox"
	Dec 17 00:42:08 no-preload-864613 kubelet[2247]: I1217 00:42:08.960740    2247 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.318772681 podStartE2EDuration="1.960720986s" podCreationTimestamp="2025-12-17 00:42:07 +0000 UTC" firstStartedPulling="2025-12-17 00:42:07.777180689 +0000 UTC m=+23.017445003" lastFinishedPulling="2025-12-17 00:42:08.419128991 +0000 UTC m=+23.659393308" observedRunningTime="2025-12-17 00:42:08.960289621 +0000 UTC m=+24.200553957" watchObservedRunningTime="2025-12-17 00:42:08.960720986 +0000 UTC m=+24.200985321"
	
	
	==> storage-provisioner [31b83acc24eae8c0e7dbd0d8e7b27f705e1244391d06c37b0e65e72cec84eec9] <==
	I1217 00:42:04.132433       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1217 00:42:04.144453       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1217 00:42:04.144513       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1217 00:42:04.148541       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:42:04.161355       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1217 00:42:04.161615       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1217 00:42:04.161862       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-864613_9d873e2e-8b39-44cd-ba08-2a485509a6d8!
	I1217 00:42:04.161795       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"77491952-ee3b-4988-94b4-88e7432dd743", APIVersion:"v1", ResourceVersion:"420", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-864613_9d873e2e-8b39-44cd-ba08-2a485509a6d8 became leader
	W1217 00:42:04.173952       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:42:04.178978       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1217 00:42:04.262428       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-864613_9d873e2e-8b39-44cd-ba08-2a485509a6d8!
	W1217 00:42:06.184028       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:42:06.333907       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:42:08.338181       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:42:08.343622       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:42:10.346734       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:42:10.350534       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:42:12.354214       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:42:12.358913       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:42:14.363203       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:42:14.368161       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-864613 -n no-preload-864613
helpers_test.go:270: (dbg) Run:  kubectl --context no-preload-864613 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.17s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (6.08s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-742860 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p old-k8s-version-742860 --alsologtostderr -v=1: exit status 80 (2.08514465s)

                                                
                                                
-- stdout --
	* Pausing node old-k8s-version-742860 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 00:42:30.479099  288683 out.go:360] Setting OutFile to fd 1 ...
	I1217 00:42:30.479408  288683 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:42:30.479430  288683 out.go:374] Setting ErrFile to fd 2...
	I1217 00:42:30.479437  288683 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:42:30.479745  288683 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-12816/.minikube/bin
	I1217 00:42:30.480272  288683 out.go:368] Setting JSON to false
	I1217 00:42:30.480299  288683 mustload.go:66] Loading cluster: old-k8s-version-742860
	I1217 00:42:30.480800  288683 config.go:182] Loaded profile config "old-k8s-version-742860": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1217 00:42:30.481470  288683 cli_runner.go:164] Run: docker container inspect old-k8s-version-742860 --format={{.State.Status}}
	I1217 00:42:30.510785  288683 host.go:66] Checking if "old-k8s-version-742860" exists ...
	I1217 00:42:30.511174  288683 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:42:30.619670  288683 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:89 OomKillDisable:false NGoroutines:89 SystemTime:2025-12-17 00:42:30.601213446 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 00:42:30.620494  288683 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1765846775-22141/minikube-v1.37.0-1765846775-22141-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1765846775-22141-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:old-k8s-version-742860 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1217 00:42:30.622849  288683 out.go:179] * Pausing node old-k8s-version-742860 ... 
	I1217 00:42:30.623910  288683 host.go:66] Checking if "old-k8s-version-742860" exists ...
	I1217 00:42:30.624254  288683 ssh_runner.go:195] Run: systemctl --version
	I1217 00:42:30.624314  288683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-742860
	I1217 00:42:30.651351  288683 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/old-k8s-version-742860/id_rsa Username:docker}
	I1217 00:42:30.759320  288683 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 00:42:30.780785  288683 pause.go:52] kubelet running: true
	I1217 00:42:30.780876  288683 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1217 00:42:31.006195  288683 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1217 00:42:31.006279  288683 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1217 00:42:31.086010  288683 cri.go:89] found id: "a4b60651ffd030d19b761fd3c47918c5ebbe75733ceae5c5e50c3e69b44beebb"
	I1217 00:42:31.086032  288683 cri.go:89] found id: "fbbd45021314326d8ef46c26a084d16832861775c1e3e32409593901efb2be3e"
	I1217 00:42:31.086038  288683 cri.go:89] found id: "98e5f84dacacedac773b65d9a13392572f7924b62854fb99ecd793603a8f1d34"
	I1217 00:42:31.086043  288683 cri.go:89] found id: "1cf05ac31ba64f118b784ca8ad4b1a57919383d731f155b068c9565667ca62b7"
	I1217 00:42:31.086048  288683 cri.go:89] found id: "2665e23f8c1d4b1b60afb71c02d698261aab64c7615ff9ebd12d544814363589"
	I1217 00:42:31.086052  288683 cri.go:89] found id: "adcc6538e3f24669f21be38c15820585a7a1c212e5fe02516c0874b1b88999cb"
	I1217 00:42:31.086057  288683 cri.go:89] found id: "ee16447f516ad3ada79ab1622f36739e4a14d0598fdc1d80fa27279d2d0e2ad8"
	I1217 00:42:31.086061  288683 cri.go:89] found id: "0051bcc55466b549d043d19c7acbc02084dfafcf4a1b9fd1b4704776608fde49"
	I1217 00:42:31.086067  288683 cri.go:89] found id: "d2bacdc7b5ee7149039abbb534298bb0d1c50567e36970b8dde0a69f80ccd23c"
	I1217 00:42:31.086075  288683 cri.go:89] found id: "c2f5e2e55fdb212b11fe534765cef6051904899119c3b0f0d2895cdc1bad1d6c"
	I1217 00:42:31.086080  288683 cri.go:89] found id: "7bfc386107bbed22f46f9153e98395f1b89a75e043668ed01443b61246824c81"
	I1217 00:42:31.086084  288683 cri.go:89] found id: ""
	I1217 00:42:31.086125  288683 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 00:42:31.098964  288683 retry.go:31] will retry after 308.484049ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T00:42:31Z" level=error msg="open /run/runc: no such file or directory"
	I1217 00:42:31.408796  288683 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 00:42:31.422070  288683 pause.go:52] kubelet running: false
	I1217 00:42:31.422194  288683 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1217 00:42:31.574609  288683 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1217 00:42:31.574706  288683 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1217 00:42:31.651137  288683 cri.go:89] found id: "a4b60651ffd030d19b761fd3c47918c5ebbe75733ceae5c5e50c3e69b44beebb"
	I1217 00:42:31.651161  288683 cri.go:89] found id: "fbbd45021314326d8ef46c26a084d16832861775c1e3e32409593901efb2be3e"
	I1217 00:42:31.651168  288683 cri.go:89] found id: "98e5f84dacacedac773b65d9a13392572f7924b62854fb99ecd793603a8f1d34"
	I1217 00:42:31.651174  288683 cri.go:89] found id: "1cf05ac31ba64f118b784ca8ad4b1a57919383d731f155b068c9565667ca62b7"
	I1217 00:42:31.651180  288683 cri.go:89] found id: "2665e23f8c1d4b1b60afb71c02d698261aab64c7615ff9ebd12d544814363589"
	I1217 00:42:31.651187  288683 cri.go:89] found id: "adcc6538e3f24669f21be38c15820585a7a1c212e5fe02516c0874b1b88999cb"
	I1217 00:42:31.651192  288683 cri.go:89] found id: "ee16447f516ad3ada79ab1622f36739e4a14d0598fdc1d80fa27279d2d0e2ad8"
	I1217 00:42:31.651198  288683 cri.go:89] found id: "0051bcc55466b549d043d19c7acbc02084dfafcf4a1b9fd1b4704776608fde49"
	I1217 00:42:31.651203  288683 cri.go:89] found id: "d2bacdc7b5ee7149039abbb534298bb0d1c50567e36970b8dde0a69f80ccd23c"
	I1217 00:42:31.651234  288683 cri.go:89] found id: "c2f5e2e55fdb212b11fe534765cef6051904899119c3b0f0d2895cdc1bad1d6c"
	I1217 00:42:31.651244  288683 cri.go:89] found id: "7bfc386107bbed22f46f9153e98395f1b89a75e043668ed01443b61246824c81"
	I1217 00:42:31.651250  288683 cri.go:89] found id: ""
	I1217 00:42:31.651311  288683 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 00:42:31.667931  288683 retry.go:31] will retry after 514.46563ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T00:42:31Z" level=error msg="open /run/runc: no such file or directory"
	I1217 00:42:32.182720  288683 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 00:42:32.198450  288683 pause.go:52] kubelet running: false
	I1217 00:42:32.198513  288683 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1217 00:42:32.375524  288683 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1217 00:42:32.375612  288683 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1217 00:42:32.452197  288683 cri.go:89] found id: "a4b60651ffd030d19b761fd3c47918c5ebbe75733ceae5c5e50c3e69b44beebb"
	I1217 00:42:32.452226  288683 cri.go:89] found id: "fbbd45021314326d8ef46c26a084d16832861775c1e3e32409593901efb2be3e"
	I1217 00:42:32.452232  288683 cri.go:89] found id: "98e5f84dacacedac773b65d9a13392572f7924b62854fb99ecd793603a8f1d34"
	I1217 00:42:32.452238  288683 cri.go:89] found id: "1cf05ac31ba64f118b784ca8ad4b1a57919383d731f155b068c9565667ca62b7"
	I1217 00:42:32.452242  288683 cri.go:89] found id: "2665e23f8c1d4b1b60afb71c02d698261aab64c7615ff9ebd12d544814363589"
	I1217 00:42:32.452247  288683 cri.go:89] found id: "adcc6538e3f24669f21be38c15820585a7a1c212e5fe02516c0874b1b88999cb"
	I1217 00:42:32.452252  288683 cri.go:89] found id: "ee16447f516ad3ada79ab1622f36739e4a14d0598fdc1d80fa27279d2d0e2ad8"
	I1217 00:42:32.452256  288683 cri.go:89] found id: "0051bcc55466b549d043d19c7acbc02084dfafcf4a1b9fd1b4704776608fde49"
	I1217 00:42:32.452260  288683 cri.go:89] found id: "d2bacdc7b5ee7149039abbb534298bb0d1c50567e36970b8dde0a69f80ccd23c"
	I1217 00:42:32.452270  288683 cri.go:89] found id: "c2f5e2e55fdb212b11fe534765cef6051904899119c3b0f0d2895cdc1bad1d6c"
	I1217 00:42:32.452274  288683 cri.go:89] found id: "7bfc386107bbed22f46f9153e98395f1b89a75e043668ed01443b61246824c81"
	I1217 00:42:32.452278  288683 cri.go:89] found id: ""
	I1217 00:42:32.452322  288683 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 00:42:32.467300  288683 out.go:203] 
	W1217 00:42:32.468445  288683 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T00:42:32Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T00:42:32Z" level=error msg="open /run/runc: no such file or directory"
	
	W1217 00:42:32.468476  288683 out.go:285] * 
	* 
	W1217 00:42:32.472683  288683 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 00:42:32.474124  288683 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p old-k8s-version-742860 --alsologtostderr -v=1 failed: exit status 80
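The exit status 80 above traces to the pause path's container listing: each retry of `sudo runc list -f json` fails with `open /run/runc: no such file or directory` on this cri-o node. A minimal, hypothetical Go sketch of that kind of check follows; it is not minikube's own implementation, and the fallback to `crictl ps` is an assumption made only for illustration.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// listContainers mirrors the failing step in the log above: when the runc
// state directory /run/runc is missing, `runc list` cannot enumerate
// containers, so this sketch falls back to asking the CRI runtime via
// crictl instead. (Illustrative only; not what minikube actually does.)
func listContainers() ([]byte, error) {
	if _, err := os.Stat("/run/runc"); err == nil {
		return exec.Command("sudo", "runc", "list", "-f", "json").Output()
	}
	return exec.Command("sudo", "crictl", "ps", "-a", "--quiet").Output()
}

func main() {
	out, err := listContainers()
	if err != nil {
		fmt.Fprintln(os.Stderr, "listing containers failed:", err)
		os.Exit(1)
	}
	fmt.Print(string(out))
}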
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect old-k8s-version-742860
helpers_test.go:244: (dbg) docker inspect old-k8s-version-742860:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5f3317a25ba0dd672e7c7b2056cadfb4682b7ff2475d42648d9662ef39b8f59b",
	        "Created": "2025-12-17T00:40:24.632786552Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 275022,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T00:41:35.880122954Z",
	            "FinishedAt": "2025-12-17T00:41:34.100458139Z"
	        },
	        "Image": "sha256:2e44aac5cae5bb6b68b129ed5c85e80a5c1aac07706537d46ba12326f0e5c3cf",
	        "ResolvConfPath": "/var/lib/docker/containers/5f3317a25ba0dd672e7c7b2056cadfb4682b7ff2475d42648d9662ef39b8f59b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5f3317a25ba0dd672e7c7b2056cadfb4682b7ff2475d42648d9662ef39b8f59b/hostname",
	        "HostsPath": "/var/lib/docker/containers/5f3317a25ba0dd672e7c7b2056cadfb4682b7ff2475d42648d9662ef39b8f59b/hosts",
	        "LogPath": "/var/lib/docker/containers/5f3317a25ba0dd672e7c7b2056cadfb4682b7ff2475d42648d9662ef39b8f59b/5f3317a25ba0dd672e7c7b2056cadfb4682b7ff2475d42648d9662ef39b8f59b-json.log",
	        "Name": "/old-k8s-version-742860",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-742860:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-742860",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "5f3317a25ba0dd672e7c7b2056cadfb4682b7ff2475d42648d9662ef39b8f59b",
	                "LowerDir": "/var/lib/docker/overlay2/b3872e7dcb375ce53f1001878e7871d4e0b55db5e9e018b728e1b163a393d733-init/diff:/var/lib/docker/overlay2/594b812fd6d8db89dab322ea9e00d43dd555e9709fb5e6953e3873cce717392c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b3872e7dcb375ce53f1001878e7871d4e0b55db5e9e018b728e1b163a393d733/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b3872e7dcb375ce53f1001878e7871d4e0b55db5e9e018b728e1b163a393d733/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b3872e7dcb375ce53f1001878e7871d4e0b55db5e9e018b728e1b163a393d733/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-742860",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-742860/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-742860",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-742860",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-742860",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "6924cecc87f284162ce88652370c5d238e3c8cb993429b76187c7aebf689f686",
	            "SandboxKey": "/var/run/docker/netns/6924cecc87f2",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33068"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33069"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33072"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33070"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33071"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-742860": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "831a77a99d636c5f3163f99f25c807a931c002c29f68db2779eee3263784692b",
	                    "EndpointID": "88836b531f1124000066a35b25dfa76f960ba272df5bd09ed9a1b58a3921ea53",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "ba:b9:f3:5b:18:51",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-742860",
	                        "5f3317a25ba0"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
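
The inspect output above records where the container's service ports are published on the host (22/tcp on 127.0.0.1:33068, 8443/tcp on 127.0.0.1:33071, and so on). As a minimal illustration, independent of the test harness and assuming only that docker is on PATH and that the container name from this log still exists, the SSH host port can be read back with a docker format template instead of parsing the full JSON:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Equivalent to:
	//   docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' old-k8s-version-742860
	// The template indexes the NetworkSettings.Ports map shown in the JSON above.
	out, err := exec.Command("docker", "container", "inspect",
		"-f", `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
		"old-k8s-version-742860").Output()
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	// Prints "33068" for the state captured in this report.
	fmt.Println("22/tcp is published on host port", strings.TrimSpace(string(out)))
}

The same template works for any of the published ports (2376/tcp, 8443/tcp, 32443/tcp, 5000/tcp) by swapping the map key.
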
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-742860 -n old-k8s-version-742860
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-742860 -n old-k8s-version-742860: exit status 2 (338.942404ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-742860 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-742860 logs -n 25: (1.118276973s)
helpers_test.go:261: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ ssh     │ force-systemd-flag-452634 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-452634    │ jenkins │ v1.37.0 │ 17 Dec 25 00:39 UTC │ 17 Dec 25 00:39 UTC │
	│ delete  │ -p force-systemd-flag-452634                                                                                                                                                                                                                  │ force-systemd-flag-452634    │ jenkins │ v1.37.0 │ 17 Dec 25 00:39 UTC │ 17 Dec 25 00:39 UTC │
	│ start   │ -p cert-options-636512 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-636512          │ jenkins │ v1.37.0 │ 17 Dec 25 00:39 UTC │ 17 Dec 25 00:40 UTC │
	│ ssh     │ cert-options-636512 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-636512          │ jenkins │ v1.37.0 │ 17 Dec 25 00:40 UTC │ 17 Dec 25 00:40 UTC │
	│ ssh     │ -p cert-options-636512 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-636512          │ jenkins │ v1.37.0 │ 17 Dec 25 00:40 UTC │ 17 Dec 25 00:40 UTC │
	│ delete  │ -p cert-options-636512                                                                                                                                                                                                                        │ cert-options-636512          │ jenkins │ v1.37.0 │ 17 Dec 25 00:40 UTC │ 17 Dec 25 00:40 UTC │
	│ start   │ -p old-k8s-version-742860 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-742860       │ jenkins │ v1.37.0 │ 17 Dec 25 00:40 UTC │ 17 Dec 25 00:41 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-742860 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-742860       │ jenkins │ v1.37.0 │ 17 Dec 25 00:41 UTC │                     │
	│ stop    │ -p old-k8s-version-742860 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-742860       │ jenkins │ v1.37.0 │ 17 Dec 25 00:41 UTC │ 17 Dec 25 00:41 UTC │
	│ delete  │ -p stopped-upgrade-028618                                                                                                                                                                                                                     │ stopped-upgrade-028618       │ jenkins │ v1.37.0 │ 17 Dec 25 00:41 UTC │ 17 Dec 25 00:41 UTC │
	│ start   │ -p no-preload-864613 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                │ no-preload-864613            │ jenkins │ v1.37.0 │ 17 Dec 25 00:41 UTC │ 17 Dec 25 00:42 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-742860 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-742860       │ jenkins │ v1.37.0 │ 17 Dec 25 00:41 UTC │ 17 Dec 25 00:41 UTC │
	│ start   │ -p old-k8s-version-742860 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-742860       │ jenkins │ v1.37.0 │ 17 Dec 25 00:41 UTC │ 17 Dec 25 00:42 UTC │
	│ start   │ -p cert-expiration-753607 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-753607       │ jenkins │ v1.37.0 │ 17 Dec 25 00:41 UTC │ 17 Dec 25 00:41 UTC │
	│ delete  │ -p cert-expiration-753607                                                                                                                                                                                                                     │ cert-expiration-753607       │ jenkins │ v1.37.0 │ 17 Dec 25 00:41 UTC │ 17 Dec 25 00:42 UTC │
	│ start   │ -p embed-certs-153232 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                        │ embed-certs-153232           │ jenkins │ v1.37.0 │ 17 Dec 25 00:42 UTC │                     │
	│ start   │ -p kubernetes-upgrade-803959 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-803959    │ jenkins │ v1.37.0 │ 17 Dec 25 00:42 UTC │                     │
	│ start   │ -p kubernetes-upgrade-803959 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                               │ kubernetes-upgrade-803959    │ jenkins │ v1.37.0 │ 17 Dec 25 00:42 UTC │ 17 Dec 25 00:42 UTC │
	│ delete  │ -p kubernetes-upgrade-803959                                                                                                                                                                                                                  │ kubernetes-upgrade-803959    │ jenkins │ v1.37.0 │ 17 Dec 25 00:42 UTC │ 17 Dec 25 00:42 UTC │
	│ delete  │ -p disable-driver-mounts-827138                                                                                                                                                                                                               │ disable-driver-mounts-827138 │ jenkins │ v1.37.0 │ 17 Dec 25 00:42 UTC │ 17 Dec 25 00:42 UTC │
	│ start   │ -p default-k8s-diff-port-414413 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                      │ default-k8s-diff-port-414413 │ jenkins │ v1.37.0 │ 17 Dec 25 00:42 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-864613 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-864613            │ jenkins │ v1.37.0 │ 17 Dec 25 00:42 UTC │                     │
	│ stop    │ -p no-preload-864613 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-864613            │ jenkins │ v1.37.0 │ 17 Dec 25 00:42 UTC │                     │
	│ image   │ old-k8s-version-742860 image list --format=json                                                                                                                                                                                               │ old-k8s-version-742860       │ jenkins │ v1.37.0 │ 17 Dec 25 00:42 UTC │ 17 Dec 25 00:42 UTC │
	│ pause   │ -p old-k8s-version-742860 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-742860       │ jenkins │ v1.37.0 │ 17 Dec 25 00:42 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 00:42:13
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 00:42:13.425350  284412 out.go:360] Setting OutFile to fd 1 ...
	I1217 00:42:13.425613  284412 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:42:13.425622  284412 out.go:374] Setting ErrFile to fd 2...
	I1217 00:42:13.425627  284412 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:42:13.425810  284412 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-12816/.minikube/bin
	I1217 00:42:13.426400  284412 out.go:368] Setting JSON to false
	I1217 00:42:13.427820  284412 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":5083,"bootTime":1765927050,"procs":315,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 00:42:13.427877  284412 start.go:143] virtualization: kvm guest
	I1217 00:42:13.429987  284412 out.go:179] * [default-k8s-diff-port-414413] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 00:42:13.431792  284412 notify.go:221] Checking for updates...
	I1217 00:42:13.431812  284412 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 00:42:13.433108  284412 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 00:42:13.434449  284412 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22168-12816/kubeconfig
	I1217 00:42:13.435642  284412 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-12816/.minikube
	I1217 00:42:12.992537  280822 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 00:42:13.007319  280822 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 00:42:13.044148  280822 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 00:42:13.044207  280822 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:42:13.054374  280822 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1217 00:42:13.054427  280822 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:42:13.063502  280822 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:42:13.073556  280822 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:42:13.082334  280822 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 00:42:13.090303  280822 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:42:13.098633  280822 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:42:13.111142  280822 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:42:13.119434  280822 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 00:42:13.126679  280822 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 00:42:13.134165  280822 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:42:13.215480  280822 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1217 00:42:13.345776  280822 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 00:42:13.345849  280822 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 00:42:13.349830  280822 start.go:564] Will wait 60s for crictl version
	I1217 00:42:13.349869  280822 ssh_runner.go:195] Run: which crictl
	I1217 00:42:13.353343  280822 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 00:42:13.379609  280822 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1217 00:42:13.379686  280822 ssh_runner.go:195] Run: crio --version
	I1217 00:42:13.408383  280822 ssh_runner.go:195] Run: crio --version
	I1217 00:42:13.438042  284412 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 00:42:13.438044  280822 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1217 00:42:13.439195  284412 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 00:42:13.440710  284412 config.go:182] Loaded profile config "embed-certs-153232": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:42:13.440811  284412 config.go:182] Loaded profile config "no-preload-864613": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1217 00:42:13.440892  284412 config.go:182] Loaded profile config "old-k8s-version-742860": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1217 00:42:13.440976  284412 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 00:42:13.467911  284412 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1217 00:42:13.468056  284412 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:42:13.531873  284412 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-17 00:42:13.522132291 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 00:42:13.531977  284412 docker.go:319] overlay module found
	I1217 00:42:13.535499  284412 out.go:179] * Using the docker driver based on user configuration
	I1217 00:42:13.536777  284412 start.go:309] selected driver: docker
	I1217 00:42:13.536790  284412 start.go:927] validating driver "docker" against <nil>
	I1217 00:42:13.536803  284412 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 00:42:13.537387  284412 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:42:13.593933  284412 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-17 00:42:13.584322089 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 00:42:13.594200  284412 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1217 00:42:13.594521  284412 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 00:42:13.596131  284412 out.go:179] * Using Docker driver with root privileges
	I1217 00:42:13.597366  284412 cni.go:84] Creating CNI manager for ""
	I1217 00:42:13.597447  284412 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 00:42:13.597463  284412 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1217 00:42:13.597538  284412 start.go:353] cluster config:
	{Name:default-k8s-diff-port-414413 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-414413 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SS
HAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:42:13.599069  284412 out.go:179] * Starting "default-k8s-diff-port-414413" primary control-plane node in "default-k8s-diff-port-414413" cluster
	I1217 00:42:13.600353  284412 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 00:42:13.601785  284412 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 00:42:13.603055  284412 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1217 00:42:13.603094  284412 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 00:42:13.603099  284412 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22168-12816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1217 00:42:13.603137  284412 cache.go:65] Caching tarball of preloaded images
	I1217 00:42:13.603233  284412 preload.go:238] Found /home/jenkins/minikube-integration/22168-12816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1217 00:42:13.603249  284412 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1217 00:42:13.603342  284412 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/default-k8s-diff-port-414413/config.json ...
	I1217 00:42:13.603366  284412 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/default-k8s-diff-port-414413/config.json: {Name:mka6d4da213f94b51ee6aa8917291169616177e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:42:13.624885  284412 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 00:42:13.624903  284412 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 00:42:13.624919  284412 cache.go:243] Successfully downloaded all kic artifacts
	I1217 00:42:13.624956  284412 start.go:360] acquireMachinesLock for default-k8s-diff-port-414413: {Name:mke046f6be338ac4ae580049fea3f7de7f7546a0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 00:42:13.625073  284412 start.go:364] duration metric: took 97.859µs to acquireMachinesLock for "default-k8s-diff-port-414413"
	I1217 00:42:13.625114  284412 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-414413 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-414413 Namespace:default API
ServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:
false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 00:42:13.625172  284412 start.go:125] createHost starting for "" (driver="docker")
	W1217 00:42:12.692976  274821 pod_ready.go:104] pod "coredns-5dd5756b68-zsfnr" is not "Ready", error: <nil>
	W1217 00:42:14.694303  274821 pod_ready.go:104] pod "coredns-5dd5756b68-zsfnr" is not "Ready", error: <nil>
	I1217 00:42:13.439142  280822 cli_runner.go:164] Run: docker network inspect embed-certs-153232 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 00:42:13.459251  280822 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1217 00:42:13.466296  280822 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 00:42:13.477415  280822 kubeadm.go:884] updating cluster {Name:embed-certs-153232 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-153232 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath
: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 00:42:13.477551  280822 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1217 00:42:13.477608  280822 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 00:42:13.521026  280822 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 00:42:13.521048  280822 crio.go:433] Images already preloaded, skipping extraction
	I1217 00:42:13.521106  280822 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 00:42:13.548245  280822 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 00:42:13.548266  280822 cache_images.go:86] Images are preloaded, skipping loading
	I1217 00:42:13.548272  280822 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.2 crio true true} ...
	I1217 00:42:13.548345  280822 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-153232 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:embed-certs-153232 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 00:42:13.548402  280822 ssh_runner.go:195] Run: crio config
	I1217 00:42:13.600376  280822 cni.go:84] Creating CNI manager for ""
	I1217 00:42:13.600399  280822 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 00:42:13.600428  280822 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 00:42:13.600459  280822 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-153232 NodeName:embed-certs-153232 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/et
c/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 00:42:13.600593  280822 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-153232"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 00:42:13.600647  280822 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1217 00:42:13.609109  280822 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 00:42:13.609177  280822 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 00:42:13.618457  280822 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1217 00:42:13.632034  280822 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1217 00:42:13.646810  280822 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1217 00:42:13.659155  280822 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1217 00:42:13.663116  280822 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 00:42:13.673749  280822 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:42:13.766570  280822 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 00:42:13.791549  280822 certs.go:69] Setting up /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/embed-certs-153232 for IP: 192.168.85.2
	I1217 00:42:13.791567  280822 certs.go:195] generating shared ca certs ...
	I1217 00:42:13.791581  280822 certs.go:227] acquiring lock for ca certs: {Name:mk3fafd0dd66863a6056cb02497503a5e6afecf4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:42:13.791719  280822 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22168-12816/.minikube/ca.key
	I1217 00:42:13.791773  280822 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22168-12816/.minikube/proxy-client-ca.key
	I1217 00:42:13.791788  280822 certs.go:257] generating profile certs ...
	I1217 00:42:13.791858  280822 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/embed-certs-153232/client.key
	I1217 00:42:13.791879  280822 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/embed-certs-153232/client.crt with IP's: []
	I1217 00:42:13.931897  280822 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/embed-certs-153232/client.crt ...
	I1217 00:42:13.931941  280822 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/embed-certs-153232/client.crt: {Name:mk8c25e57f2f8daa19706224acf3754a12f32af1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:42:13.932140  280822 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/embed-certs-153232/client.key ...
	I1217 00:42:13.932159  280822 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/embed-certs-153232/client.key: {Name:mk2c9fe39e980f7adec2b0b45d4dad6dbf834896 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:42:13.932305  280822 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/embed-certs-153232/apiserver.key.9c5b6ce4
	I1217 00:42:13.932331  280822 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/embed-certs-153232/apiserver.crt.9c5b6ce4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1217 00:42:13.954282  280822 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/embed-certs-153232/apiserver.crt.9c5b6ce4 ...
	I1217 00:42:13.954312  280822 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/embed-certs-153232/apiserver.crt.9c5b6ce4: {Name:mk582e98b44497a2e848e683ee679e82a17cf5c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:42:13.954464  280822 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/embed-certs-153232/apiserver.key.9c5b6ce4 ...
	I1217 00:42:13.954482  280822 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/embed-certs-153232/apiserver.key.9c5b6ce4: {Name:mka28b595b95ae43e4d2f00cd9b44a155e05e19e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:42:13.954583  280822 certs.go:382] copying /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/embed-certs-153232/apiserver.crt.9c5b6ce4 -> /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/embed-certs-153232/apiserver.crt
	I1217 00:42:13.954679  280822 certs.go:386] copying /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/embed-certs-153232/apiserver.key.9c5b6ce4 -> /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/embed-certs-153232/apiserver.key
	I1217 00:42:13.954768  280822 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/embed-certs-153232/proxy-client.key
	I1217 00:42:13.954789  280822 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/embed-certs-153232/proxy-client.crt with IP's: []
	I1217 00:42:14.024509  280822 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/embed-certs-153232/proxy-client.crt ...
	I1217 00:42:14.024587  280822 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/embed-certs-153232/proxy-client.crt: {Name:mkc314325393e00688a5407669a5c2320665cfe5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:42:14.024763  280822 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/embed-certs-153232/proxy-client.key ...
	I1217 00:42:14.024780  280822 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/embed-certs-153232/proxy-client.key: {Name:mk79c7cee649273fe46757e20d434c5580381acb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:42:14.025167  280822 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/16354.pem (1338 bytes)
	W1217 00:42:14.025224  280822 certs.go:480] ignoring /home/jenkins/minikube-integration/22168-12816/.minikube/certs/16354_empty.pem, impossibly tiny 0 bytes
	I1217 00:42:14.025235  280822 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca-key.pem (1679 bytes)
	I1217 00:42:14.025268  280822 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem (1078 bytes)
	I1217 00:42:14.025301  280822 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/cert.pem (1123 bytes)
	I1217 00:42:14.025330  280822 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/key.pem (1679 bytes)
	I1217 00:42:14.025395  280822 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/files/etc/ssl/certs/163542.pem (1708 bytes)
	I1217 00:42:14.026196  280822 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 00:42:14.049460  280822 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1217 00:42:14.068747  280822 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 00:42:14.087833  280822 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1671 bytes)
	I1217 00:42:14.116430  280822 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/embed-certs-153232/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1217 00:42:14.139966  280822 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/embed-certs-153232/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 00:42:14.158228  280822 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/embed-certs-153232/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 00:42:14.176905  280822 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/embed-certs-153232/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1217 00:42:14.196729  280822 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 00:42:14.215467  280822 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/certs/16354.pem --> /usr/share/ca-certificates/16354.pem (1338 bytes)
	I1217 00:42:14.234299  280822 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/files/etc/ssl/certs/163542.pem --> /usr/share/ca-certificates/163542.pem (1708 bytes)
	I1217 00:42:14.252944  280822 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 00:42:14.265467  280822 ssh_runner.go:195] Run: openssl version
	I1217 00:42:14.271634  280822 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/163542.pem
	I1217 00:42:14.280744  280822 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/163542.pem /etc/ssl/certs/163542.pem
	I1217 00:42:14.301216  280822 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/163542.pem
	I1217 00:42:14.305112  280822 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 00:13 /usr/share/ca-certificates/163542.pem
	I1217 00:42:14.305164  280822 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/163542.pem
	I1217 00:42:14.341152  280822 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 00:42:14.349191  280822 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/163542.pem /etc/ssl/certs/3ec20f2e.0
	I1217 00:42:14.357835  280822 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:42:14.367699  280822 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 00:42:14.377047  280822 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:42:14.380939  280822 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 00:05 /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:42:14.381006  280822 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:42:14.415396  280822 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 00:42:14.423239  280822 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1217 00:42:14.431055  280822 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/16354.pem
	I1217 00:42:14.438559  280822 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/16354.pem /etc/ssl/certs/16354.pem
	I1217 00:42:14.446490  280822 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16354.pem
	I1217 00:42:14.450289  280822 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 00:13 /usr/share/ca-certificates/16354.pem
	I1217 00:42:14.450347  280822 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16354.pem
	I1217 00:42:14.489616  280822 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 00:42:14.498064  280822 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/16354.pem /etc/ssl/certs/51391683.0
	I1217 00:42:14.506815  280822 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 00:42:14.510629  280822 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1217 00:42:14.510682  280822 kubeadm.go:401] StartCluster: {Name:embed-certs-153232 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-153232 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: S
ocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:42:14.510761  280822 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 00:42:14.510808  280822 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 00:42:14.545675  280822 cri.go:89] found id: ""
	I1217 00:42:14.545767  280822 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 00:42:14.557101  280822 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 00:42:14.566472  280822 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 00:42:14.566523  280822 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 00:42:14.574810  280822 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 00:42:14.574828  280822 kubeadm.go:158] found existing configuration files:
	
	I1217 00:42:14.574870  280822 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1217 00:42:14.582762  280822 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 00:42:14.582807  280822 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 00:42:14.590725  280822 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1217 00:42:14.600690  280822 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 00:42:14.600743  280822 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 00:42:14.609796  280822 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1217 00:42:14.620158  280822 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 00:42:14.620209  280822 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 00:42:14.629063  280822 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1217 00:42:14.637594  280822 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 00:42:14.637641  280822 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 00:42:14.646132  280822 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 00:42:14.713244  280822 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1217 00:42:14.786963  280822 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 00:42:13.627508  284412 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1217 00:42:13.627696  284412 start.go:159] libmachine.API.Create for "default-k8s-diff-port-414413" (driver="docker")
	I1217 00:42:13.627727  284412 client.go:173] LocalClient.Create starting
	I1217 00:42:13.627818  284412 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem
	I1217 00:42:13.627856  284412 main.go:143] libmachine: Decoding PEM data...
	I1217 00:42:13.627875  284412 main.go:143] libmachine: Parsing certificate...
	I1217 00:42:13.627933  284412 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22168-12816/.minikube/certs/cert.pem
	I1217 00:42:13.627954  284412 main.go:143] libmachine: Decoding PEM data...
	I1217 00:42:13.627966  284412 main.go:143] libmachine: Parsing certificate...
	I1217 00:42:13.628342  284412 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-414413 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1217 00:42:13.646705  284412 cli_runner.go:211] docker network inspect default-k8s-diff-port-414413 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1217 00:42:13.646775  284412 network_create.go:284] running [docker network inspect default-k8s-diff-port-414413] to gather additional debugging logs...
	I1217 00:42:13.646799  284412 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-414413
	W1217 00:42:13.664461  284412 cli_runner.go:211] docker network inspect default-k8s-diff-port-414413 returned with exit code 1
	I1217 00:42:13.664484  284412 network_create.go:287] error running [docker network inspect default-k8s-diff-port-414413]: docker network inspect default-k8s-diff-port-414413: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-414413 not found
	I1217 00:42:13.664497  284412 network_create.go:289] output of [docker network inspect default-k8s-diff-port-414413]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-414413 not found
	
	** /stderr **
	I1217 00:42:13.664608  284412 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 00:42:13.681478  284412 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-6ffd1d738f01 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:6e:3d:52:75:47:82} reservation:<nil>}
	I1217 00:42:13.682165  284412 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-280edd437675 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:5e:ae:02:b5:f9:a6} reservation:<nil>}
	I1217 00:42:13.682868  284412 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-9f28d049043c IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:fa:3f:8e:e9:44:56} reservation:<nil>}
	I1217 00:42:13.683638  284412 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e4b870}
	I1217 00:42:13.683658  284412 network_create.go:124] attempt to create docker network default-k8s-diff-port-414413 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1217 00:42:13.683700  284412 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-414413 default-k8s-diff-port-414413
	I1217 00:42:13.738737  284412 network_create.go:108] docker network default-k8s-diff-port-414413 192.168.76.0/24 created
	I1217 00:42:13.738783  284412 kic.go:121] calculated static IP "192.168.76.2" for the "default-k8s-diff-port-414413" container
	I1217 00:42:13.738873  284412 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1217 00:42:13.756323  284412 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-414413 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-414413 --label created_by.minikube.sigs.k8s.io=true
	I1217 00:42:13.774421  284412 oci.go:103] Successfully created a docker volume default-k8s-diff-port-414413
	I1217 00:42:13.774523  284412 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-414413-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-414413 --entrypoint /usr/bin/test -v default-k8s-diff-port-414413:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib
	I1217 00:42:14.153339  284412 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-414413
	I1217 00:42:14.153415  284412 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1217 00:42:14.153431  284412 kic.go:194] Starting extracting preloaded images to volume ...
	I1217 00:42:14.153511  284412 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22168-12816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-414413:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir
	I1217 00:42:18.328288  284412 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22168-12816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-414413:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir: (4.174730873s)
	I1217 00:42:18.328321  284412 kic.go:203] duration metric: took 4.174886362s to extract preloaded images to volume ...
	W1217 00:42:18.328406  284412 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1217 00:42:18.328443  284412 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1217 00:42:18.328502  284412 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1217 00:42:18.395411  284412 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-414413 --name default-k8s-diff-port-414413 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-414413 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-414413 --network default-k8s-diff-port-414413 --ip 192.168.76.2 --volume default-k8s-diff-port-414413:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78
	W1217 00:42:16.697857  274821 pod_ready.go:104] pod "coredns-5dd5756b68-zsfnr" is not "Ready", error: <nil>
	I1217 00:42:17.303259  274821 pod_ready.go:94] pod "coredns-5dd5756b68-zsfnr" is "Ready"
	I1217 00:42:17.303287  274821 pod_ready.go:86] duration metric: took 30.615694661s for pod "coredns-5dd5756b68-zsfnr" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:42:17.312654  274821 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-742860" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:42:17.445225  274821 pod_ready.go:94] pod "etcd-old-k8s-version-742860" is "Ready"
	I1217 00:42:17.445248  274821 pod_ready.go:86] duration metric: took 132.564599ms for pod "etcd-old-k8s-version-742860" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:42:17.692662  274821 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-742860" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:42:17.697420  274821 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-742860" is "Ready"
	I1217 00:42:17.697441  274821 pod_ready.go:86] duration metric: took 4.754866ms for pod "kube-apiserver-old-k8s-version-742860" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:42:17.699980  274821 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-742860" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:42:17.703601  274821 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-742860" is "Ready"
	I1217 00:42:17.703618  274821 pod_ready.go:86] duration metric: took 3.596754ms for pod "kube-controller-manager-old-k8s-version-742860" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:42:17.705907  274821 pod_ready.go:83] waiting for pod "kube-proxy-ltxr5" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:42:17.991275  274821 pod_ready.go:94] pod "kube-proxy-ltxr5" is "Ready"
	I1217 00:42:17.991301  274821 pod_ready.go:86] duration metric: took 285.375737ms for pod "kube-proxy-ltxr5" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:42:18.191836  274821 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-742860" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:42:18.591537  274821 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-742860" is "Ready"
	I1217 00:42:18.591573  274821 pod_ready.go:86] duration metric: took 399.708553ms for pod "kube-scheduler-old-k8s-version-742860" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:42:18.591588  274821 pod_ready.go:40] duration metric: took 31.90774753s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 00:42:18.637874  274821 start.go:625] kubectl: 1.34.3, cluster: 1.28.0 (minor skew: 6)
	I1217 00:42:18.639849  274821 out.go:203] 
	W1217 00:42:18.641170  274821 out.go:285] ! /usr/local/bin/kubectl is version 1.34.3, which may have incompatibilities with Kubernetes 1.28.0.
	I1217 00:42:18.642421  274821 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1217 00:42:18.643773  274821 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-742860" cluster and "default" namespace by default
	I1217 00:42:18.696115  284412 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-414413 --format={{.State.Running}}
	I1217 00:42:18.718913  284412 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-414413 --format={{.State.Status}}
	I1217 00:42:18.741656  284412 cli_runner.go:164] Run: docker exec default-k8s-diff-port-414413 stat /var/lib/dpkg/alternatives/iptables
	I1217 00:42:18.795516  284412 oci.go:144] the created container "default-k8s-diff-port-414413" has a running status.
	I1217 00:42:18.795548  284412 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22168-12816/.minikube/machines/default-k8s-diff-port-414413/id_rsa...
	I1217 00:42:18.846177  284412 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22168-12816/.minikube/machines/default-k8s-diff-port-414413/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1217 00:42:18.882262  284412 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-414413 --format={{.State.Status}}
	I1217 00:42:18.903843  284412 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1217 00:42:18.903865  284412 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-414413 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1217 00:42:18.968216  284412 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-414413 --format={{.State.Status}}
	I1217 00:42:18.991308  284412 machine.go:94] provisionDockerMachine start ...
	I1217 00:42:18.991413  284412 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-414413
	I1217 00:42:19.014834  284412 main.go:143] libmachine: Using SSH client type: native
	I1217 00:42:19.015144  284412 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1217 00:42:19.015159  284412 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 00:42:19.015961  284412 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:34784->127.0.0.1:33078: read: connection reset by peer
	I1217 00:42:22.161150  284412 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-414413
	
	I1217 00:42:22.161174  284412 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-414413"
	I1217 00:42:22.161235  284412 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-414413
	I1217 00:42:22.178942  284412 main.go:143] libmachine: Using SSH client type: native
	I1217 00:42:22.179175  284412 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1217 00:42:22.179191  284412 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-414413 && echo "default-k8s-diff-port-414413" | sudo tee /etc/hostname
	I1217 00:42:22.324778  284412 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-414413
	
	I1217 00:42:22.324870  284412 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-414413
	I1217 00:42:22.343254  284412 main.go:143] libmachine: Using SSH client type: native
	I1217 00:42:22.343460  284412 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1217 00:42:22.343477  284412 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-414413' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-414413/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-414413' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 00:42:22.466476  284412 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 00:42:22.466503  284412 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22168-12816/.minikube CaCertPath:/home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22168-12816/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22168-12816/.minikube}
	I1217 00:42:22.466529  284412 ubuntu.go:190] setting up certificates
	I1217 00:42:22.466539  284412 provision.go:84] configureAuth start
	I1217 00:42:22.466589  284412 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-414413
	I1217 00:42:22.484296  284412 provision.go:143] copyHostCerts
	I1217 00:42:22.484359  284412 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-12816/.minikube/ca.pem, removing ...
	I1217 00:42:22.484371  284412 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-12816/.minikube/ca.pem
	I1217 00:42:22.484441  284412 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22168-12816/.minikube/ca.pem (1078 bytes)
	I1217 00:42:22.485116  284412 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-12816/.minikube/cert.pem, removing ...
	I1217 00:42:22.485147  284412 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-12816/.minikube/cert.pem
	I1217 00:42:22.485196  284412 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22168-12816/.minikube/cert.pem (1123 bytes)
	I1217 00:42:22.485305  284412 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-12816/.minikube/key.pem, removing ...
	I1217 00:42:22.485312  284412 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-12816/.minikube/key.pem
	I1217 00:42:22.485353  284412 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22168-12816/.minikube/key.pem (1679 bytes)
	I1217 00:42:22.485441  284412 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22168-12816/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-414413 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-414413 localhost minikube]
	I1217 00:42:22.536004  284412 provision.go:177] copyRemoteCerts
	I1217 00:42:22.536055  284412 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 00:42:22.536091  284412 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-414413
	I1217 00:42:22.554318  284412 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/default-k8s-diff-port-414413/id_rsa Username:docker}
	I1217 00:42:22.645690  284412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1217 00:42:22.664448  284412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1217 00:42:22.680722  284412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1217 00:42:22.697045  284412 provision.go:87] duration metric: took 230.488272ms to configureAuth
	I1217 00:42:22.697069  284412 ubuntu.go:206] setting minikube options for container-runtime
	I1217 00:42:22.697252  284412 config.go:182] Loaded profile config "default-k8s-diff-port-414413": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:42:22.697339  284412 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-414413
	I1217 00:42:22.714736  284412 main.go:143] libmachine: Using SSH client type: native
	I1217 00:42:22.714937  284412 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1217 00:42:22.714952  284412 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 00:42:22.976459  284412 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 00:42:22.976488  284412 machine.go:97] duration metric: took 3.985157633s to provisionDockerMachine
	I1217 00:42:22.976498  284412 client.go:176] duration metric: took 9.348761675s to LocalClient.Create
	I1217 00:42:22.976516  284412 start.go:167] duration metric: took 9.348820581s to libmachine.API.Create "default-k8s-diff-port-414413"
	I1217 00:42:22.976523  284412 start.go:293] postStartSetup for "default-k8s-diff-port-414413" (driver="docker")
	I1217 00:42:22.976532  284412 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 00:42:22.976602  284412 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 00:42:22.976638  284412 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-414413
	I1217 00:42:22.994665  284412 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/default-k8s-diff-port-414413/id_rsa Username:docker}
	I1217 00:42:23.090217  284412 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 00:42:23.093763  284412 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 00:42:23.093787  284412 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 00:42:23.093798  284412 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-12816/.minikube/addons for local assets ...
	I1217 00:42:23.093844  284412 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-12816/.minikube/files for local assets ...
	I1217 00:42:23.093915  284412 filesync.go:149] local asset: /home/jenkins/minikube-integration/22168-12816/.minikube/files/etc/ssl/certs/163542.pem -> 163542.pem in /etc/ssl/certs
	I1217 00:42:23.094019  284412 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 00:42:23.101129  284412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/files/etc/ssl/certs/163542.pem --> /etc/ssl/certs/163542.pem (1708 bytes)
	I1217 00:42:23.123842  284412 start.go:296] duration metric: took 147.299871ms for postStartSetup
	I1217 00:42:23.124296  284412 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-414413
	I1217 00:42:23.147842  284412 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/default-k8s-diff-port-414413/config.json ...
	I1217 00:42:23.148132  284412 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 00:42:23.148186  284412 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-414413
	I1217 00:42:23.169849  284412 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/default-k8s-diff-port-414413/id_rsa Username:docker}
	I1217 00:42:23.264873  284412 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 00:42:23.270048  284412 start.go:128] duration metric: took 9.644861332s to createHost
	I1217 00:42:23.270073  284412 start.go:83] releasing machines lock for "default-k8s-diff-port-414413", held for 9.644974115s
	I1217 00:42:23.270140  284412 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-414413
	I1217 00:42:23.290363  284412 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 00:42:23.290422  284412 ssh_runner.go:195] Run: cat /version.json
	I1217 00:42:23.290475  284412 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-414413
	I1217 00:42:23.290480  284412 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-414413
	I1217 00:42:23.312103  284412 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/default-k8s-diff-port-414413/id_rsa Username:docker}
	I1217 00:42:23.312872  284412 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/default-k8s-diff-port-414413/id_rsa Username:docker}
	I1217 00:42:23.475013  284412 ssh_runner.go:195] Run: systemctl --version
	I1217 00:42:23.483020  284412 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 00:42:23.525259  284412 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 00:42:23.531281  284412 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 00:42:23.531351  284412 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 00:42:23.559880  284412 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1217 00:42:23.559906  284412 start.go:496] detecting cgroup driver to use...
	I1217 00:42:23.559937  284412 detect.go:190] detected "systemd" cgroup driver on host os
	I1217 00:42:23.559986  284412 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 00:42:23.578944  284412 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 00:42:23.593091  284412 docker.go:218] disabling cri-docker service (if available) ...
	I1217 00:42:23.593148  284412 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 00:42:23.611935  284412 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 00:42:23.631472  284412 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 00:42:23.715792  284412 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 00:42:23.810158  284412 docker.go:234] disabling docker service ...
	I1217 00:42:23.810219  284412 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 00:42:23.831038  284412 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 00:42:23.844184  284412 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 00:42:23.945679  284412 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 00:42:24.040529  284412 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 00:42:24.053221  284412 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 00:42:24.067028  284412 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 00:42:24.067088  284412 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:42:24.077679  284412 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1217 00:42:24.077758  284412 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:42:24.086530  284412 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:42:24.094927  284412 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:42:24.103190  284412 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 00:42:24.110927  284412 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:42:24.119110  284412 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:42:24.131728  284412 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:42:24.141205  284412 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 00:42:24.149980  284412 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 00:42:24.158509  284412 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:42:24.245067  284412 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1217 00:42:24.383439  284412 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 00:42:24.383491  284412 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 00:42:24.387340  284412 start.go:564] Will wait 60s for crictl version
	I1217 00:42:24.387393  284412 ssh_runner.go:195] Run: which crictl
	I1217 00:42:24.391594  284412 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 00:42:24.421479  284412 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1217 00:42:24.421571  284412 ssh_runner.go:195] Run: crio --version
	I1217 00:42:24.450403  284412 ssh_runner.go:195] Run: crio --version
	I1217 00:42:24.481154  284412 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1217 00:42:25.363097  280822 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1217 00:42:25.363180  280822 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 00:42:25.363310  280822 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 00:42:25.363378  280822 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1217 00:42:25.363416  280822 kubeadm.go:319] OS: Linux
	I1217 00:42:25.363457  280822 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 00:42:25.363517  280822 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 00:42:25.363596  280822 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 00:42:25.363654  280822 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 00:42:25.363693  280822 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 00:42:25.363730  280822 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 00:42:25.363795  280822 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 00:42:25.363867  280822 kubeadm.go:319] CGROUPS_IO: enabled
	I1217 00:42:25.363976  280822 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 00:42:25.364122  280822 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 00:42:25.364261  280822 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 00:42:25.364325  280822 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 00:42:25.365770  280822 out.go:252]   - Generating certificates and keys ...
	I1217 00:42:25.365844  280822 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 00:42:25.365909  280822 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 00:42:25.365981  280822 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1217 00:42:25.366082  280822 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1217 00:42:25.366176  280822 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1217 00:42:25.366254  280822 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1217 00:42:25.366329  280822 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1217 00:42:25.366529  280822 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-153232 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1217 00:42:25.366608  280822 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1217 00:42:25.366730  280822 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-153232 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1217 00:42:25.366787  280822 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1217 00:42:25.366842  280822 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1217 00:42:25.366884  280822 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1217 00:42:25.366962  280822 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 00:42:25.367066  280822 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 00:42:25.367171  280822 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 00:42:25.367245  280822 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 00:42:25.367339  280822 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 00:42:25.367427  280822 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 00:42:25.367530  280822 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 00:42:25.367598  280822 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 00:42:25.369424  280822 out.go:252]   - Booting up control plane ...
	I1217 00:42:25.369555  280822 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 00:42:25.369664  280822 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 00:42:25.369767  280822 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 00:42:25.369905  280822 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 00:42:25.370030  280822 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 00:42:25.370130  280822 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 00:42:25.370207  280822 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 00:42:25.370245  280822 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 00:42:25.370382  280822 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 00:42:25.370502  280822 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 00:42:25.370597  280822 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001771621s
	I1217 00:42:25.370713  280822 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1217 00:42:25.370817  280822 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1217 00:42:25.370926  280822 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1217 00:42:25.371056  280822 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1217 00:42:25.371173  280822 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.504574196s
	I1217 00:42:25.371269  280822 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.209644719s
	I1217 00:42:25.371361  280822 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.001875065s
	I1217 00:42:25.371507  280822 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1217 00:42:25.371667  280822 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1217 00:42:25.371724  280822 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1217 00:42:25.371914  280822 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-153232 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1217 00:42:25.371962  280822 kubeadm.go:319] [bootstrap-token] Using token: ko15qb.now6jl3ph6pd34nn
	I1217 00:42:25.373295  280822 out.go:252]   - Configuring RBAC rules ...
	I1217 00:42:25.373384  280822 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1217 00:42:25.373450  280822 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1217 00:42:25.373567  280822 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1217 00:42:25.373672  280822 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1217 00:42:25.373769  280822 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1217 00:42:25.373872  280822 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1217 00:42:25.374019  280822 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1217 00:42:25.374095  280822 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1217 00:42:25.374176  280822 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1217 00:42:25.374190  280822 kubeadm.go:319] 
	I1217 00:42:25.374276  280822 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1217 00:42:25.374285  280822 kubeadm.go:319] 
	I1217 00:42:25.374392  280822 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1217 00:42:25.374415  280822 kubeadm.go:319] 
	I1217 00:42:25.374456  280822 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1217 00:42:25.374539  280822 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1217 00:42:25.374616  280822 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1217 00:42:25.374622  280822 kubeadm.go:319] 
	I1217 00:42:25.374701  280822 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1217 00:42:25.374709  280822 kubeadm.go:319] 
	I1217 00:42:25.374777  280822 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1217 00:42:25.374788  280822 kubeadm.go:319] 
	I1217 00:42:25.374861  280822 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1217 00:42:25.374985  280822 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1217 00:42:25.375120  280822 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1217 00:42:25.375129  280822 kubeadm.go:319] 
	I1217 00:42:25.375253  280822 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1217 00:42:25.375380  280822 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1217 00:42:25.375397  280822 kubeadm.go:319] 
	I1217 00:42:25.375478  280822 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token ko15qb.now6jl3ph6pd34nn \
	I1217 00:42:25.375578  280822 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:a7c34974519aee4953e03245da076d7a2eba06e40135880a85806e2dab303fa1 \
	I1217 00:42:25.375596  280822 kubeadm.go:319] 	--control-plane 
	I1217 00:42:25.375605  280822 kubeadm.go:319] 
	I1217 00:42:25.375692  280822 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1217 00:42:25.375704  280822 kubeadm.go:319] 
	I1217 00:42:25.375776  280822 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token ko15qb.now6jl3ph6pd34nn \
	I1217 00:42:25.375892  280822 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:a7c34974519aee4953e03245da076d7a2eba06e40135880a85806e2dab303fa1 
	I1217 00:42:25.375912  280822 cni.go:84] Creating CNI manager for ""
	I1217 00:42:25.375921  280822 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 00:42:25.377461  280822 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1217 00:42:24.482181  284412 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-414413 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 00:42:24.500103  284412 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1217 00:42:24.504033  284412 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 00:42:24.514203  284412 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-414413 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-414413 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 00:42:24.514357  284412 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1217 00:42:24.514420  284412 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 00:42:24.549089  284412 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 00:42:24.549110  284412 crio.go:433] Images already preloaded, skipping extraction
	I1217 00:42:24.549168  284412 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 00:42:24.576769  284412 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 00:42:24.576789  284412 cache_images.go:86] Images are preloaded, skipping loading
	I1217 00:42:24.576796  284412 kubeadm.go:935] updating node { 192.168.76.2 8444 v1.34.2 crio true true} ...
	I1217 00:42:24.576874  284412 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-414413 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-414413 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 00:42:24.576933  284412 ssh_runner.go:195] Run: crio config
	I1217 00:42:24.636708  284412 cni.go:84] Creating CNI manager for ""
	I1217 00:42:24.636734  284412 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 00:42:24.636754  284412 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 00:42:24.636784  284412 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-414413 NodeName:default-k8s-diff-port-414413 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 00:42:24.636928  284412 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-414413"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 00:42:24.636985  284412 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1217 00:42:24.645729  284412 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 00:42:24.645790  284412 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 00:42:24.653240  284412 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1217 00:42:24.665529  284412 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1217 00:42:24.679521  284412 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I1217 00:42:24.692478  284412 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1217 00:42:24.696237  284412 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 00:42:24.707607  284412 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:42:24.796697  284412 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 00:42:24.823776  284412 certs.go:69] Setting up /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/default-k8s-diff-port-414413 for IP: 192.168.76.2
	I1217 00:42:24.823796  284412 certs.go:195] generating shared ca certs ...
	I1217 00:42:24.823815  284412 certs.go:227] acquiring lock for ca certs: {Name:mk3fafd0dd66863a6056cb02497503a5e6afecf4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:42:24.824002  284412 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22168-12816/.minikube/ca.key
	I1217 00:42:24.824058  284412 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22168-12816/.minikube/proxy-client-ca.key
	I1217 00:42:24.824071  284412 certs.go:257] generating profile certs ...
	I1217 00:42:24.824139  284412 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/default-k8s-diff-port-414413/client.key
	I1217 00:42:24.824163  284412 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/default-k8s-diff-port-414413/client.crt with IP's: []
	I1217 00:42:24.852562  284412 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/default-k8s-diff-port-414413/client.crt ...
	I1217 00:42:24.852599  284412 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/default-k8s-diff-port-414413/client.crt: {Name:mka80e8090ced21fe214c4429fd9ec50414b69e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:42:24.852809  284412 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/default-k8s-diff-port-414413/client.key ...
	I1217 00:42:24.852839  284412 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/default-k8s-diff-port-414413/client.key: {Name:mk62a29ba28613a4004b4ccae22fd87b0d191523 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:42:24.852959  284412 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/default-k8s-diff-port-414413/apiserver.key.0797176d
	I1217 00:42:24.852983  284412 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/default-k8s-diff-port-414413/apiserver.crt.0797176d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1217 00:42:24.967203  284412 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/default-k8s-diff-port-414413/apiserver.crt.0797176d ...
	I1217 00:42:24.967231  284412 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/default-k8s-diff-port-414413/apiserver.crt.0797176d: {Name:mkb07a0cfa1d161554e1e162c2d1dd89d2a5dfbe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:42:24.967410  284412 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/default-k8s-diff-port-414413/apiserver.key.0797176d ...
	I1217 00:42:24.967429  284412 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/default-k8s-diff-port-414413/apiserver.key.0797176d: {Name:mk55630e42ed064bd9f3f3fc63dff82f33584198 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:42:24.967535  284412 certs.go:382] copying /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/default-k8s-diff-port-414413/apiserver.crt.0797176d -> /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/default-k8s-diff-port-414413/apiserver.crt
	I1217 00:42:24.967638  284412 certs.go:386] copying /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/default-k8s-diff-port-414413/apiserver.key.0797176d -> /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/default-k8s-diff-port-414413/apiserver.key
	I1217 00:42:24.967728  284412 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/default-k8s-diff-port-414413/proxy-client.key
	I1217 00:42:24.967750  284412 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/default-k8s-diff-port-414413/proxy-client.crt with IP's: []
	I1217 00:42:25.084560  284412 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/default-k8s-diff-port-414413/proxy-client.crt ...
	I1217 00:42:25.084586  284412 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/default-k8s-diff-port-414413/proxy-client.crt: {Name:mk9cec1d5cf5471bb58286e60be308e98426f625 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:42:25.084739  284412 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/default-k8s-diff-port-414413/proxy-client.key ...
	I1217 00:42:25.084752  284412 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/default-k8s-diff-port-414413/proxy-client.key: {Name:mk7ab6737b164e794c0b4e6ebb0e06044ef10b05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:42:25.084929  284412 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/16354.pem (1338 bytes)
	W1217 00:42:25.084967  284412 certs.go:480] ignoring /home/jenkins/minikube-integration/22168-12816/.minikube/certs/16354_empty.pem, impossibly tiny 0 bytes
	I1217 00:42:25.084978  284412 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca-key.pem (1679 bytes)
	I1217 00:42:25.085011  284412 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem (1078 bytes)
	I1217 00:42:25.085035  284412 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/cert.pem (1123 bytes)
	I1217 00:42:25.085059  284412 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/key.pem (1679 bytes)
	I1217 00:42:25.085101  284412 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/files/etc/ssl/certs/163542.pem (1708 bytes)
	I1217 00:42:25.085590  284412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 00:42:25.103620  284412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1217 00:42:25.119905  284412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 00:42:25.136724  284412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1671 bytes)
	I1217 00:42:25.155524  284412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/default-k8s-diff-port-414413/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1217 00:42:25.173583  284412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/default-k8s-diff-port-414413/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 00:42:25.190975  284412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/default-k8s-diff-port-414413/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 00:42:25.208690  284412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/default-k8s-diff-port-414413/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 00:42:25.224639  284412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/files/etc/ssl/certs/163542.pem --> /usr/share/ca-certificates/163542.pem (1708 bytes)
	I1217 00:42:25.242235  284412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 00:42:25.258717  284412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/certs/16354.pem --> /usr/share/ca-certificates/16354.pem (1338 bytes)
	I1217 00:42:25.274690  284412 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
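
The certs.go/crypto.go lines above generate the profile certificates (client, apiserver with IP SANs 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.76.2, and the aggregator proxy-client) and copy them onto the node. A minimal sketch, assuming only Go's standard crypto/x509, of issuing a CA-signed serving certificate with the same IP SANs; the CommonNames and 24h lifetimes are illustrative and this is not the minikube implementation.

// sketch: create a throwaway CA, then a serving cert with the IP SANs seen above.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, err := x509.ParseCertificate(caDER)
	if err != nil {
		log.Fatal(err)
	}

	leafKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	leafTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.76.2"),
		},
	}
	leafDER, err := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
}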
	I1217 00:42:25.286214  284412 ssh_runner.go:195] Run: openssl version
	I1217 00:42:25.292258  284412 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/16354.pem
	I1217 00:42:25.299149  284412 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/16354.pem /etc/ssl/certs/16354.pem
	I1217 00:42:25.306151  284412 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16354.pem
	I1217 00:42:25.309759  284412 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 00:13 /usr/share/ca-certificates/16354.pem
	I1217 00:42:25.309812  284412 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16354.pem
	I1217 00:42:25.344499  284412 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 00:42:25.351424  284412 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/16354.pem /etc/ssl/certs/51391683.0
	I1217 00:42:25.358983  284412 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/163542.pem
	I1217 00:42:25.366943  284412 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/163542.pem /etc/ssl/certs/163542.pem
	I1217 00:42:25.374915  284412 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/163542.pem
	I1217 00:42:25.378552  284412 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 00:13 /usr/share/ca-certificates/163542.pem
	I1217 00:42:25.378596  284412 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/163542.pem
	I1217 00:42:25.419137  284412 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 00:42:25.427115  284412 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/163542.pem /etc/ssl/certs/3ec20f2e.0
	I1217 00:42:25.434488  284412 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:42:25.442142  284412 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 00:42:25.450136  284412 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:42:25.454062  284412 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 00:05 /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:42:25.454109  284412 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:42:25.494451  284412 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 00:42:25.503079  284412 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
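
The openssl x509 -hash / ln -fs pairs above install each CA under /etc/ssl/certs/<subject-hash>.0, the layout OpenSSL-based clients use to look up trust anchors. A minimal sketch reproducing those two steps via os/exec; it assumes openssl is on PATH and the same root privileges the log's sudo calls have, and the source path is the one shown in the log.

// sketch: compute the OpenSSL subject hash of a CA and create the <hash>.0
// symlink, mirroring the openssl/ln commands above.
package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	src := "/usr/share/ca-certificates/minikubeCA.pem"

	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", src).Output()
	if err != nil {
		log.Fatal(err)
	}
	hash := strings.TrimSpace(string(out))

	link := filepath.Join("/etc/ssl/certs", hash+".0")
	os.Remove(link) // ln -fs semantics: replace an existing link if present
	if err := os.Symlink(src, link); err != nil {
		log.Fatal(err)
	}
	fmt.Println("linked", link, "->", src)
}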
	I1217 00:42:25.510443  284412 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 00:42:25.514221  284412 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1217 00:42:25.514276  284412 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-414413 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-414413 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:42:25.514372  284412 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 00:42:25.514440  284412 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 00:42:25.548742  284412 cri.go:89] found id: ""
	I1217 00:42:25.548820  284412 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 00:42:25.557060  284412 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 00:42:25.565878  284412 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 00:42:25.565934  284412 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 00:42:25.573969  284412 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 00:42:25.573984  284412 kubeadm.go:158] found existing configuration files:
	
	I1217 00:42:25.574040  284412 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1217 00:42:25.581656  284412 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 00:42:25.581709  284412 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 00:42:25.590704  284412 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1217 00:42:25.599408  284412 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 00:42:25.599480  284412 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 00:42:25.607459  284412 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1217 00:42:25.615333  284412 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 00:42:25.615378  284412 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 00:42:25.623098  284412 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1217 00:42:25.633138  284412 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 00:42:25.633310  284412 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
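
The stale-config cleanup above greps each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and removes files that do not reference it (here the files simply do not exist yet, so every grep exits 2 and the rm -f is a no-op). A minimal sketch of the same check in Go; the endpoint and file list are taken from the log.

// sketch: remove kubeconfigs that do not reference the expected control-plane
// endpoint, the same check the log performs with grep + rm -f.
package main

import (
	"log"
	"os"
	"strings"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:8444"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err == nil && strings.Contains(string(data), endpoint) {
			log.Printf("%s already points at %s", f, endpoint)
			continue
		}
		// missing file or stale endpoint: drop it so kubeadm regenerates it
		if err := os.Remove(f); err != nil && !os.IsNotExist(err) {
			log.Printf("remove %s: %v", f, err)
		}
	}
}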
	I1217 00:42:25.643498  284412 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 00:42:25.703723  284412 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1217 00:42:25.703802  284412 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 00:42:25.728550  284412 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 00:42:25.728647  284412 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1217 00:42:25.728711  284412 kubeadm.go:319] OS: Linux
	I1217 00:42:25.728765  284412 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 00:42:25.729383  284412 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 00:42:25.729459  284412 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 00:42:25.729525  284412 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 00:42:25.729591  284412 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 00:42:25.729662  284412 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 00:42:25.729731  284412 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 00:42:25.729818  284412 kubeadm.go:319] CGROUPS_IO: enabled
	I1217 00:42:25.791909  284412 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 00:42:25.792121  284412 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 00:42:25.792257  284412 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 00:42:25.800835  284412 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 00:42:25.378471  280822 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1217 00:42:25.382353  280822 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1217 00:42:25.382368  280822 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1217 00:42:25.395147  280822 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1217 00:42:25.614156  280822 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1217 00:42:25.614230  280822 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:42:25.614258  280822 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-153232 minikube.k8s.io/updated_at=2025_12_17T00_42_25_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=c7bb9b74fe8fa422b352c813eb039f077f405cb1 minikube.k8s.io/name=embed-certs-153232 minikube.k8s.io/primary=true
	I1217 00:42:25.623979  280822 ops.go:34] apiserver oom_adj: -16
	I1217 00:42:25.709129  280822 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:42:26.210217  280822 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:42:26.709257  280822 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:42:27.209215  280822 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:42:27.710023  280822 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:42:25.806145  284412 out.go:252]   - Generating certificates and keys ...
	I1217 00:42:25.806235  284412 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 00:42:25.806314  284412 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 00:42:25.881909  284412 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1217 00:42:25.972413  284412 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1217 00:42:26.080519  284412 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1217 00:42:26.376564  284412 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1217 00:42:26.626312  284412 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1217 00:42:26.627204  284412 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-414413 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1217 00:42:26.838913  284412 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1217 00:42:26.839117  284412 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-414413 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1217 00:42:27.110011  284412 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1217 00:42:27.251069  284412 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1217 00:42:27.658928  284412 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1217 00:42:27.659053  284412 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 00:42:27.852972  284412 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 00:42:28.066138  284412 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 00:42:28.730380  284412 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 00:42:28.996408  284412 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 00:42:29.207321  284412 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 00:42:29.207879  284412 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 00:42:29.212404  284412 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 00:42:28.209399  280822 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:42:28.709273  280822 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:42:29.209905  280822 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:42:29.709366  280822 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:42:30.210194  280822 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:42:30.282220  280822 kubeadm.go:1114] duration metric: took 4.668029947s to wait for elevateKubeSystemPrivileges
	I1217 00:42:30.282269  280822 kubeadm.go:403] duration metric: took 15.771591882s to StartCluster
	I1217 00:42:30.282291  280822 settings.go:142] acquiring lock: {Name:mk7d7632cd00ceda791845d793d841181ea8188a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:42:30.282367  280822 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22168-12816/kubeconfig
	I1217 00:42:30.284526  280822 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/kubeconfig: {Name:mkd977475b39383babd1c1bcd25b2b3c1ea2d727 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:42:30.284763  280822 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1217 00:42:30.284789  280822 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 00:42:30.284861  280822 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 00:42:30.285025  280822 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-153232"
	I1217 00:42:30.285041  280822 addons.go:70] Setting default-storageclass=true in profile "embed-certs-153232"
	I1217 00:42:30.285054  280822 config.go:182] Loaded profile config "embed-certs-153232": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:42:30.285085  280822 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-153232"
	I1217 00:42:30.285050  280822 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-153232"
	I1217 00:42:30.285168  280822 host.go:66] Checking if "embed-certs-153232" exists ...
	I1217 00:42:30.285442  280822 cli_runner.go:164] Run: docker container inspect embed-certs-153232 --format={{.State.Status}}
	I1217 00:42:30.285643  280822 cli_runner.go:164] Run: docker container inspect embed-certs-153232 --format={{.State.Status}}
	I1217 00:42:30.286200  280822 out.go:179] * Verifying Kubernetes components...
	I1217 00:42:30.287754  280822 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:42:30.309129  280822 addons.go:239] Setting addon default-storageclass=true in "embed-certs-153232"
	I1217 00:42:30.309175  280822 host.go:66] Checking if "embed-certs-153232" exists ...
	I1217 00:42:30.309631  280822 cli_runner.go:164] Run: docker container inspect embed-certs-153232 --format={{.State.Status}}
	I1217 00:42:30.313090  280822 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 00:42:30.314146  280822 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:42:30.314162  280822 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 00:42:30.314215  280822 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-153232
	I1217 00:42:30.341831  280822 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 00:42:30.341853  280822 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 00:42:30.341927  280822 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-153232
	I1217 00:42:30.343949  280822 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/embed-certs-153232/id_rsa Username:docker}
	I1217 00:42:30.377242  280822 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/embed-certs-153232/id_rsa Username:docker}
	I1217 00:42:30.403273  280822 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1217 00:42:30.463094  280822 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 00:42:30.490014  280822 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:42:30.523196  280822 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:42:30.737525  280822 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1217 00:42:30.738713  280822 node_ready.go:35] waiting up to 6m0s for node "embed-certs-153232" to be "Ready" ...
	I1217 00:42:30.955302  280822 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
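
At this point the embed-certs-153232 start-up has enabled its addons and is waiting up to 6m0s for the node to report Ready. A minimal sketch of that kind of wait, polling the node's Ready condition through kubectl with a deadline; the kubeconfig path and node name are copied from the log and are illustrative only, and the real harness uses the bundled kubectl binary rather than one on PATH.

// sketch: poll a node's Ready condition until it is True or a deadline passes.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
	"time"
)

func main() {
	deadline := time.Now().Add(6 * time.Minute)
	args := []string{
		"--kubeconfig=/var/lib/minikube/kubeconfig",
		"get", "node", "embed-certs-153232",
		"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`,
	}
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", args...).Output()
		if err == nil && strings.TrimSpace(string(out)) == "True" {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	log.Fatal("timed out waiting for node to become Ready")
}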
	
	
	==> CRI-O <==
	Dec 17 00:42:06 old-k8s-version-742860 crio[566]: time="2025-12-17T00:42:06.336153207Z" level=info msg="Started container" PID=1728 containerID=324ad454b6f2d48651270a786dfd3cf704b868b70b7f2becfd1846029681442c description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-hbb7l/dashboard-metrics-scraper id=fbc2917e-e10b-49ce-a2a2-5d30513d0add name=/runtime.v1.RuntimeService/StartContainer sandboxID=3a3cfac6985eeef058d7ca64769e8a0c4845aeed1d56a82c91b017277e982f51
	Dec 17 00:42:07 old-k8s-version-742860 crio[566]: time="2025-12-17T00:42:07.182087494Z" level=info msg="Removing container: 4d4e827663dd03284a764ac2f5372619101f2e614e72c6ba517394082e67c65c" id=c06753bf-68f9-4bb7-8444-e61daa6fa874 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 17 00:42:07 old-k8s-version-742860 crio[566]: time="2025-12-17T00:42:07.202088561Z" level=info msg="Removed container 4d4e827663dd03284a764ac2f5372619101f2e614e72c6ba517394082e67c65c: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-hbb7l/dashboard-metrics-scraper" id=c06753bf-68f9-4bb7-8444-e61daa6fa874 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 17 00:42:17 old-k8s-version-742860 crio[566]: time="2025-12-17T00:42:17.123123739Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=4b990ae4-c208-4fa0-acef-32e723ce7e4e name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:42:17 old-k8s-version-742860 crio[566]: time="2025-12-17T00:42:17.124099048Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=23563d03-7cd9-42e5-af03-39eee936dc2c name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:42:17 old-k8s-version-742860 crio[566]: time="2025-12-17T00:42:17.152904487Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=d724efa8-b33b-4615-928a-9b1450923521 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 00:42:17 old-k8s-version-742860 crio[566]: time="2025-12-17T00:42:17.153060316Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 00:42:17 old-k8s-version-742860 crio[566]: time="2025-12-17T00:42:17.213324004Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 00:42:17 old-k8s-version-742860 crio[566]: time="2025-12-17T00:42:17.213536791Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/2e9ae91bafd056c4b78af48bb26353eec43bd54356f0718859ccefe177147b62/merged/etc/passwd: no such file or directory"
	Dec 17 00:42:17 old-k8s-version-742860 crio[566]: time="2025-12-17T00:42:17.213565143Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/2e9ae91bafd056c4b78af48bb26353eec43bd54356f0718859ccefe177147b62/merged/etc/group: no such file or directory"
	Dec 17 00:42:17 old-k8s-version-742860 crio[566]: time="2025-12-17T00:42:17.213875854Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 00:42:17 old-k8s-version-742860 crio[566]: time="2025-12-17T00:42:17.308156301Z" level=info msg="Created container a4b60651ffd030d19b761fd3c47918c5ebbe75733ceae5c5e50c3e69b44beebb: kube-system/storage-provisioner/storage-provisioner" id=d724efa8-b33b-4615-928a-9b1450923521 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 00:42:17 old-k8s-version-742860 crio[566]: time="2025-12-17T00:42:17.308872007Z" level=info msg="Starting container: a4b60651ffd030d19b761fd3c47918c5ebbe75733ceae5c5e50c3e69b44beebb" id=4bb73f62-e5bf-4c0b-b307-9635f445bc74 name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 00:42:17 old-k8s-version-742860 crio[566]: time="2025-12-17T00:42:17.311306225Z" level=info msg="Started container" PID=1749 containerID=a4b60651ffd030d19b761fd3c47918c5ebbe75733ceae5c5e50c3e69b44beebb description=kube-system/storage-provisioner/storage-provisioner id=4bb73f62-e5bf-4c0b-b307-9635f445bc74 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c278dcf4f5cfb7b95abffae9c794456e1ad173ca8dfc893bd73c9bf4841c6dba
	Dec 17 00:42:23 old-k8s-version-742860 crio[566]: time="2025-12-17T00:42:23.007655927Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=c49a7e86-5671-405a-a3d6-47980d6456b8 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:42:23 old-k8s-version-742860 crio[566]: time="2025-12-17T00:42:23.008606496Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=f7e0ae95-c548-4ae3-99e2-5540fbdcedbe name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:42:23 old-k8s-version-742860 crio[566]: time="2025-12-17T00:42:23.009741431Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-hbb7l/dashboard-metrics-scraper" id=ecfb2c4b-dae6-49f3-97b1-7ff81ebc4aab name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 00:42:23 old-k8s-version-742860 crio[566]: time="2025-12-17T00:42:23.009860398Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 00:42:23 old-k8s-version-742860 crio[566]: time="2025-12-17T00:42:23.016038459Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 00:42:23 old-k8s-version-742860 crio[566]: time="2025-12-17T00:42:23.016454842Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 00:42:23 old-k8s-version-742860 crio[566]: time="2025-12-17T00:42:23.042190667Z" level=info msg="Created container c2f5e2e55fdb212b11fe534765cef6051904899119c3b0f0d2895cdc1bad1d6c: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-hbb7l/dashboard-metrics-scraper" id=ecfb2c4b-dae6-49f3-97b1-7ff81ebc4aab name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 00:42:23 old-k8s-version-742860 crio[566]: time="2025-12-17T00:42:23.042730485Z" level=info msg="Starting container: c2f5e2e55fdb212b11fe534765cef6051904899119c3b0f0d2895cdc1bad1d6c" id=4c511198-2d0a-4e86-a9ad-fdb67b92f828 name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 00:42:23 old-k8s-version-742860 crio[566]: time="2025-12-17T00:42:23.044361397Z" level=info msg="Started container" PID=1781 containerID=c2f5e2e55fdb212b11fe534765cef6051904899119c3b0f0d2895cdc1bad1d6c description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-hbb7l/dashboard-metrics-scraper id=4c511198-2d0a-4e86-a9ad-fdb67b92f828 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3a3cfac6985eeef058d7ca64769e8a0c4845aeed1d56a82c91b017277e982f51
	Dec 17 00:42:23 old-k8s-version-742860 crio[566]: time="2025-12-17T00:42:23.143918989Z" level=info msg="Removing container: 324ad454b6f2d48651270a786dfd3cf704b868b70b7f2becfd1846029681442c" id=df405ba9-53c1-4115-a067-3a022f7e7aaa name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 17 00:42:23 old-k8s-version-742860 crio[566]: time="2025-12-17T00:42:23.154135202Z" level=info msg="Removed container 324ad454b6f2d48651270a786dfd3cf704b868b70b7f2becfd1846029681442c: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-hbb7l/dashboard-metrics-scraper" id=df405ba9-53c1-4115-a067-3a022f7e7aaa name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	c2f5e2e55fdb2       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           10 seconds ago      Exited              dashboard-metrics-scraper   2                   3a3cfac6985ee       dashboard-metrics-scraper-5f989dc9cf-hbb7l       kubernetes-dashboard
	a4b60651ffd03       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           16 seconds ago      Running             storage-provisioner         1                   c278dcf4f5cfb       storage-provisioner                              kube-system
	7bfc386107bbe       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   31 seconds ago      Running             kubernetes-dashboard        0                   3ba73abd27b0d       kubernetes-dashboard-8694d4445c-hl62s            kubernetes-dashboard
	9af078d322a28       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           47 seconds ago      Running             busybox                     1                   c208b89a4b136       busybox                                          default
	fbbd450213143       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           47 seconds ago      Running             coredns                     0                   ddbd1dea0f2dc       coredns-5dd5756b68-zsfnr                         kube-system
	98e5f84dacace       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           47 seconds ago      Running             kube-proxy                  0                   adce65bacc882       kube-proxy-ltxr5                                 kube-system
	1cf05ac31ba64       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           47 seconds ago      Running             kindnet-cni                 0                   3494d5997013d       kindnet-9sklv                                    kube-system
	2665e23f8c1d4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           47 seconds ago      Exited              storage-provisioner         0                   c278dcf4f5cfb       storage-provisioner                              kube-system
	adcc6538e3f24       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           51 seconds ago      Running             etcd                        0                   4542710d8ef9a       etcd-old-k8s-version-742860                      kube-system
	ee16447f516ad       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           51 seconds ago      Running             kube-apiserver              0                   f913bcfddc2ba       kube-apiserver-old-k8s-version-742860            kube-system
	0051bcc55466b       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           51 seconds ago      Running             kube-controller-manager     0                   b5cb735fa0175       kube-controller-manager-old-k8s-version-742860   kube-system
	d2bacdc7b5ee7       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           51 seconds ago      Running             kube-scheduler              0                   57cca05a9bc74       kube-scheduler-old-k8s-version-742860            kube-system
	
	
	==> coredns [fbbd45021314326d8ef46c26a084d16832861775c1e3e32409593901efb2be3e] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 4c7f44b73086be760ec9e64204f63c5cc5a952c8c1c55ba0b41d8fc3315ce3c7d0259d04847cb8b4561043d4549603f3bccfd9b397eeb814eef159d244d26f39
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:47927 - 21497 "HINFO IN 283357864927804702.3965380075545912610. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.086054929s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               old-k8s-version-742860
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-742860
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c7bb9b74fe8fa422b352c813eb039f077f405cb1
	                    minikube.k8s.io/name=old-k8s-version-742860
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_17T00_40_40_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Dec 2025 00:40:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-742860
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Dec 2025 00:42:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Dec 2025 00:42:16 +0000   Wed, 17 Dec 2025 00:40:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Dec 2025 00:42:16 +0000   Wed, 17 Dec 2025 00:40:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Dec 2025 00:42:16 +0000   Wed, 17 Dec 2025 00:40:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Dec 2025 00:42:16 +0000   Wed, 17 Dec 2025 00:41:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    old-k8s-version-742860
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 f435001bb2f82675fe905cfc693dda54
	  System UUID:                adda18ce-4c65-4338-86bc-e27f9ae5140e
	  Boot ID:                    0e9cedc6-c46e-4354-b3d2-9272a8b33ae5
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         85s
	  kube-system                 coredns-5dd5756b68-zsfnr                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     100s
	  kube-system                 etcd-old-k8s-version-742860                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         114s
	  kube-system                 kindnet-9sklv                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      100s
	  kube-system                 kube-apiserver-old-k8s-version-742860             250m (3%)     0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-controller-manager-old-k8s-version-742860    200m (2%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-proxy-ltxr5                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         100s
	  kube-system                 kube-scheduler-old-k8s-version-742860             100m (1%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         100s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-hbb7l        0 (0%)        0 (0%)      0 (0%)           0 (0%)         36s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-hl62s             0 (0%)        0 (0%)      0 (0%)           0 (0%)         36s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 99s                kube-proxy       
	  Normal  Starting                 47s                kube-proxy       
	  Normal  NodeHasSufficientMemory  114s               kubelet          Node old-k8s-version-742860 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    114s               kubelet          Node old-k8s-version-742860 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     114s               kubelet          Node old-k8s-version-742860 status is now: NodeHasSufficientPID
	  Normal  Starting                 114s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           101s               node-controller  Node old-k8s-version-742860 event: Registered Node old-k8s-version-742860 in Controller
	  Normal  NodeReady                88s                kubelet          Node old-k8s-version-742860 status is now: NodeReady
	  Normal  Starting                 52s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  51s (x8 over 52s)  kubelet          Node old-k8s-version-742860 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    51s (x8 over 52s)  kubelet          Node old-k8s-version-742860 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     51s (x8 over 52s)  kubelet          Node old-k8s-version-742860 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           36s                node-controller  Node old-k8s-version-742860 event: Registered Node old-k8s-version-742860 in Controller
	
	
	==> dmesg <==
	[  +0.089382] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024236] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.864694] kauditd_printk_skb: 47 callbacks suppressed
	[Dec17 00:07] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +1.006904] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +1.023891] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +1.023891] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +1.023853] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +1.023879] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +2.048755] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +4.030595] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +8.447143] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[ +16.382404] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000015] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[Dec17 00:08] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	
	
	==> etcd [adcc6538e3f24669f21be38c15820585a7a1c212e5fe02516c0874b1b88999cb] <==
	{"level":"info","ts":"2025-12-17T00:42:06.323495Z","caller":"traceutil/trace.go:171","msg":"trace[1337757081] linearizableReadLoop","detail":"{readStateIndex:608; appliedIndex:606; }","duration":"133.179621ms","start":"2025-12-17T00:42:06.190295Z","end":"2025-12-17T00:42:06.323474Z","steps":["trace[1337757081] 'read index received'  (duration: 96.287147ms)","trace[1337757081] 'applied index is now lower than readState.Index'  (duration: 36.891659ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-17T00:42:06.323612Z","caller":"traceutil/trace.go:171","msg":"trace[839948030] transaction","detail":"{read_only:false; response_revision:578; number_of_response:1; }","duration":"167.331723ms","start":"2025-12-17T00:42:06.156256Z","end":"2025-12-17T00:42:06.323588Z","steps":["trace[839948030] 'process raft request'  (duration: 130.241292ms)","trace[839948030] 'compare'  (duration: 36.694335ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-17T00:42:06.323791Z","caller":"traceutil/trace.go:171","msg":"trace[2042802556] transaction","detail":"{read_only:false; response_revision:579; number_of_response:1; }","duration":"165.845915ms","start":"2025-12-17T00:42:06.1579Z","end":"2025-12-17T00:42:06.323746Z","steps":["trace[2042802556] 'process raft request'  (duration: 165.500455ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-17T00:42:06.323792Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"133.510105ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-5dd5756b68-zsfnr\" ","response":"range_response_count:1 size:4991"}
	{"level":"info","ts":"2025-12-17T00:42:06.323858Z","caller":"traceutil/trace.go:171","msg":"trace[259088056] range","detail":"{range_begin:/registry/pods/kube-system/coredns-5dd5756b68-zsfnr; range_end:; response_count:1; response_revision:579; }","duration":"133.60205ms","start":"2025-12-17T00:42:06.190245Z","end":"2025-12-17T00:42:06.323847Z","steps":["trace[259088056] 'agreement among raft nodes before linearized reading'  (duration: 133.468445ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-17T00:42:06.727316Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"266.993668ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6571766884905226960 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-hbb7l.1881d9e9560565d4\" mod_revision:572 > success:<request_put:<key:\"/registry/events/kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-hbb7l.1881d9e9560565d4\" value_size:753 lease:6571766884905226362 >> failure:<request_range:<key:\"/registry/events/kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-hbb7l.1881d9e9560565d4\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-12-17T00:42:06.727513Z","caller":"traceutil/trace.go:171","msg":"trace[1195386968] transaction","detail":"{read_only:false; response_revision:580; number_of_response:1; }","duration":"390.594576ms","start":"2025-12-17T00:42:06.336888Z","end":"2025-12-17T00:42:06.727483Z","steps":["trace[1195386968] 'process raft request'  (duration: 122.707954ms)","trace[1195386968] 'compare'  (duration: 266.908095ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-17T00:42:06.727631Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-17T00:42:06.336871Z","time spent":"390.700977ms","remote":"127.0.0.1:33744","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":868,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/events/kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-hbb7l.1881d9e9560565d4\" mod_revision:572 > success:<request_put:<key:\"/registry/events/kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-hbb7l.1881d9e9560565d4\" value_size:753 lease:6571766884905226362 >> failure:<request_range:<key:\"/registry/events/kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-hbb7l.1881d9e9560565d4\" > >"}
	{"level":"info","ts":"2025-12-17T00:42:17.085323Z","caller":"traceutil/trace.go:171","msg":"trace[762366583] transaction","detail":"{read_only:false; response_revision:595; number_of_response:1; }","duration":"224.347568ms","start":"2025-12-17T00:42:16.860954Z","end":"2025-12-17T00:42:17.085301Z","steps":["trace[762366583] 'process raft request'  (duration: 143.443207ms)","trace[762366583] 'compare'  (duration: 80.751426ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-17T00:42:17.085435Z","caller":"traceutil/trace.go:171","msg":"trace[1037565871] transaction","detail":"{read_only:false; response_revision:596; number_of_response:1; }","duration":"222.713763ms","start":"2025-12-17T00:42:16.862703Z","end":"2025-12-17T00:42:17.085417Z","steps":["trace[1037565871] 'process raft request'  (duration: 222.550897ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T00:42:17.21269Z","caller":"traceutil/trace.go:171","msg":"trace[1664020812] transaction","detail":"{read_only:false; response_revision:597; number_of_response:1; }","duration":"119.945544ms","start":"2025-12-17T00:42:17.092722Z","end":"2025-12-17T00:42:17.212667Z","steps":["trace[1664020812] 'process raft request'  (duration: 97.497048ms)","trace[1664020812] 'compare'  (duration: 22.079703ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-17T00:42:17.441167Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"127.881631ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6571766884905227059 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/storage-provisioner.1881d9e4f8f4f9d0\" mod_revision:480 > success:<request_put:<key:\"/registry/events/kube-system/storage-provisioner.1881d9e4f8f4f9d0\" value_size:676 lease:6571766884905226362 >> failure:<request_range:<key:\"/registry/events/kube-system/storage-provisioner.1881d9e4f8f4f9d0\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-12-17T00:42:17.441297Z","caller":"traceutil/trace.go:171","msg":"trace[30602521] transaction","detail":"{read_only:false; response_revision:600; number_of_response:1; }","duration":"128.570085ms","start":"2025-12-17T00:42:17.312704Z","end":"2025-12-17T00:42:17.441275Z","steps":["trace[30602521] 'compare'  (duration: 127.769217ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-17T00:42:17.441433Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"127.032527ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-old-k8s-version-742860\" ","response":"range_response_count:1 size:5233"}
	{"level":"info","ts":"2025-12-17T00:42:17.441459Z","caller":"traceutil/trace.go:171","msg":"trace[778675753] range","detail":"{range_begin:/registry/pods/kube-system/etcd-old-k8s-version-742860; range_end:; response_count:1; response_revision:600; }","duration":"127.061404ms","start":"2025-12-17T00:42:17.314391Z","end":"2025-12-17T00:42:17.441452Z","steps":["trace[778675753] 'agreement among raft nodes before linearized reading'  (duration: 126.950487ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T00:42:17.441324Z","caller":"traceutil/trace.go:171","msg":"trace[693293040] linearizableReadLoop","detail":"{readStateIndex:632; appliedIndex:631; }","duration":"126.895327ms","start":"2025-12-17T00:42:17.314418Z","end":"2025-12-17T00:42:17.441313Z","steps":["trace[693293040] 'read index received'  (duration: 21.846µs)","trace[693293040] 'applied index is now lower than readState.Index'  (duration: 126.872398ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-17T00:42:17.690688Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"128.694061ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6571766884905227063 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/storage-provisioner.1881d9e4f9af2dde\" mod_revision:481 > success:<request_put:<key:\"/registry/events/kube-system/storage-provisioner.1881d9e4f9af2dde\" value_size:676 lease:6571766884905226362 >> failure:<request_range:<key:\"/registry/events/kube-system/storage-provisioner.1881d9e4f9af2dde\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-12-17T00:42:17.690777Z","caller":"traceutil/trace.go:171","msg":"trace[1845861456] linearizableReadLoop","detail":"{readStateIndex:633; appliedIndex:632; }","duration":"244.170352ms","start":"2025-12-17T00:42:17.446595Z","end":"2025-12-17T00:42:17.690765Z","steps":["trace[1845861456] 'read index received'  (duration: 115.326744ms)","trace[1845861456] 'applied index is now lower than readState.Index'  (duration: 128.842478ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-17T00:42:17.690812Z","caller":"traceutil/trace.go:171","msg":"trace[474943943] transaction","detail":"{read_only:false; response_revision:601; number_of_response:1; }","duration":"245.189246ms","start":"2025-12-17T00:42:17.445604Z","end":"2025-12-17T00:42:17.690793Z","steps":["trace[474943943] 'process raft request'  (duration: 116.314586ms)","trace[474943943] 'compare'  (duration: 128.595914ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-17T00:42:17.690943Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"244.361762ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:8 size:41490"}
	{"level":"info","ts":"2025-12-17T00:42:17.690969Z","caller":"traceutil/trace.go:171","msg":"trace[1420928011] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:8; response_revision:601; }","duration":"244.390315ms","start":"2025-12-17T00:42:17.446571Z","end":"2025-12-17T00:42:17.690961Z","steps":["trace[1420928011] 'agreement among raft nodes before linearized reading'  (duration: 244.227687ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-17T00:42:17.942966Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"115.972531ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-17T00:42:17.943047Z","caller":"traceutil/trace.go:171","msg":"trace[2065532751] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:601; }","duration":"116.076662ms","start":"2025-12-17T00:42:17.826958Z","end":"2025-12-17T00:42:17.943035Z","steps":["trace[2065532751] 'range keys from in-memory index tree'  (duration: 115.858279ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-17T00:42:17.943029Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"152.340112ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-proxy-ltxr5\" ","response":"range_response_count:1 size:4429"}
	{"level":"info","ts":"2025-12-17T00:42:17.943174Z","caller":"traceutil/trace.go:171","msg":"trace[7598113] range","detail":"{range_begin:/registry/pods/kube-system/kube-proxy-ltxr5; range_end:; response_count:1; response_revision:601; }","duration":"152.487217ms","start":"2025-12-17T00:42:17.790665Z","end":"2025-12-17T00:42:17.943152Z","steps":["trace[7598113] 'range keys from in-memory index tree'  (duration: 152.178474ms)"],"step_count":1}
	
	
	==> kernel <==
	 00:42:33 up  1:25,  0 user,  load average: 3.52, 2.74, 1.87
	Linux old-k8s-version-742860 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [1cf05ac31ba64f118b784ca8ad4b1a57919383d731f155b068c9565667ca62b7] <==
	I1217 00:41:46.610747       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1217 00:41:46.611209       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1217 00:41:46.611404       1 main.go:148] setting mtu 1500 for CNI 
	I1217 00:41:46.611462       1 main.go:178] kindnetd IP family: "ipv4"
	I1217 00:41:46.611509       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-17T00:41:46Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1217 00:41:46.813910       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1217 00:41:46.814194       1 controller.go:381] "Waiting for informer caches to sync"
	I1217 00:41:46.814217       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1217 00:41:46.814331       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1217 00:41:47.207492       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1217 00:41:47.207529       1 metrics.go:72] Registering metrics
	I1217 00:41:47.207592       1 controller.go:711] "Syncing nftables rules"
	I1217 00:41:56.817139       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1217 00:41:56.817183       1 main.go:301] handling current node
	I1217 00:42:06.814712       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1217 00:42:06.814763       1 main.go:301] handling current node
	I1217 00:42:16.813823       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1217 00:42:16.813903       1 main.go:301] handling current node
	I1217 00:42:26.816092       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1217 00:42:26.816122       1 main.go:301] handling current node
	
	
	==> kube-apiserver [ee16447f516ad3ada79ab1622f36739e4a14d0598fdc1d80fa27279d2d0e2ad8] <==
	I1217 00:41:45.398305       1 crd_finalizer.go:266] Starting CRDFinalizer
	I1217 00:41:45.455120       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1217 00:41:45.486112       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1217 00:41:45.486143       1 shared_informer.go:318] Caches are synced for configmaps
	I1217 00:41:45.486150       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1217 00:41:45.486296       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1217 00:41:45.486312       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1217 00:41:45.486314       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1217 00:41:45.487245       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1217 00:41:45.487275       1 aggregator.go:166] initial CRD sync complete...
	I1217 00:41:45.487282       1 autoregister_controller.go:141] Starting autoregister controller
	I1217 00:41:45.487287       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1217 00:41:45.487293       1 cache.go:39] Caches are synced for autoregister controller
	I1217 00:41:45.526772       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1217 00:41:46.390543       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1217 00:41:46.507761       1 controller.go:624] quota admission added evaluator for: namespaces
	I1217 00:41:46.541123       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1217 00:41:46.561352       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1217 00:41:46.570506       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1217 00:41:46.579187       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1217 00:41:46.624822       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.110.70.2"}
	I1217 00:41:46.640679       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.104.165.81"}
	I1217 00:41:57.658403       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1217 00:41:57.806464       1 controller.go:624] quota admission added evaluator for: endpoints
	I1217 00:41:57.957657       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [0051bcc55466b549d043d19c7acbc02084dfafcf4a1b9fd1b4704776608fde49] <==
	I1217 00:41:57.807738       1 range_allocator.go:178] "Starting range CIDR allocator"
	I1217 00:41:57.807750       1 shared_informer.go:311] Waiting for caches to sync for cidrallocator
	I1217 00:41:57.807760       1 shared_informer.go:318] Caches are synced for cidrallocator
	I1217 00:41:57.833521       1 shared_informer.go:318] Caches are synced for taint
	I1217 00:41:57.833658       1 taint_manager.go:206] "Starting NoExecuteTaintManager"
	I1217 00:41:57.833705       1 taint_manager.go:211] "Sending events to api server"
	I1217 00:41:57.833751       1 node_lifecycle_controller.go:1225] "Initializing eviction metric for zone" zone=""
	I1217 00:41:57.833838       1 event.go:307] "Event occurred" object="old-k8s-version-742860" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node old-k8s-version-742860 event: Registered Node old-k8s-version-742860 in Controller"
	I1217 00:41:57.833873       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="old-k8s-version-742860"
	I1217 00:41:57.833941       1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1217 00:41:57.838283       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I1217 00:41:57.855414       1 shared_informer.go:318] Caches are synced for GC
	I1217 00:41:57.857776       1 shared_informer.go:318] Caches are synced for daemon sets
	I1217 00:41:58.184774       1 shared_informer.go:318] Caches are synced for garbage collector
	I1217 00:41:58.243660       1 shared_informer.go:318] Caches are synced for garbage collector
	I1217 00:41:58.243695       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1217 00:42:03.122171       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="12.321492ms"
	I1217 00:42:03.122609       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="80.2µs"
	I1217 00:42:06.153948       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="86.334µs"
	I1217 00:42:07.189770       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="86.919µs"
	I1217 00:42:08.112453       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="186.92µs"
	I1217 00:42:17.087360       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="229.166651ms"
	I1217 00:42:17.087483       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="78.777µs"
	I1217 00:42:23.154969       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="73.953µs"
	I1217 00:42:28.015263       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="107.308µs"
	
	
	==> kube-proxy [98e5f84dacacedac773b65d9a13392572f7924b62854fb99ecd793603a8f1d34] <==
	I1217 00:41:46.427068       1 server_others.go:69] "Using iptables proxy"
	I1217 00:41:46.439338       1 node.go:141] Successfully retrieved node IP: 192.168.94.2
	I1217 00:41:46.459891       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1217 00:41:46.462451       1 server_others.go:152] "Using iptables Proxier"
	I1217 00:41:46.462480       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1217 00:41:46.462487       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1217 00:41:46.462517       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1217 00:41:46.462798       1 server.go:846] "Version info" version="v1.28.0"
	I1217 00:41:46.462859       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 00:41:46.463606       1 config.go:315] "Starting node config controller"
	I1217 00:41:46.463678       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1217 00:41:46.464791       1 config.go:188] "Starting service config controller"
	I1217 00:41:46.464949       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1217 00:41:46.465073       1 config.go:97] "Starting endpoint slice config controller"
	I1217 00:41:46.465337       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1217 00:41:46.564631       1 shared_informer.go:318] Caches are synced for node config
	I1217 00:41:46.565859       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1217 00:41:46.565886       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [d2bacdc7b5ee7149039abbb534298bb0d1c50567e36970b8dde0a69f80ccd23c] <==
	I1217 00:41:43.139435       1 serving.go:348] Generated self-signed cert in-memory
	W1217 00:41:45.433417       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1217 00:41:45.433470       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1217 00:41:45.433484       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1217 00:41:45.433494       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1217 00:41:45.456075       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1217 00:41:45.456116       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 00:41:45.460155       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1217 00:41:45.460194       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1217 00:41:45.460437       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1217 00:41:45.460499       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1217 00:41:45.560642       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 17 00:41:57 old-k8s-version-742860 kubelet[727]: I1217 00:41:57.700298     727 topology_manager.go:215] "Topology Admit Handler" podUID="ea6229a9-6cc8-4a75-a422-59e0f08b134d" podNamespace="kubernetes-dashboard" podName="kubernetes-dashboard-8694d4445c-hl62s"
	Dec 17 00:41:57 old-k8s-version-742860 kubelet[727]: I1217 00:41:57.808809     727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/7becd935-9668-4f00-b6fd-b0fd758c3d67-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-hbb7l\" (UID: \"7becd935-9668-4f00-b6fd-b0fd758c3d67\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-hbb7l"
	Dec 17 00:41:57 old-k8s-version-742860 kubelet[727]: I1217 00:41:57.808855     727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzwmj\" (UniqueName: \"kubernetes.io/projected/7becd935-9668-4f00-b6fd-b0fd758c3d67-kube-api-access-dzwmj\") pod \"dashboard-metrics-scraper-5f989dc9cf-hbb7l\" (UID: \"7becd935-9668-4f00-b6fd-b0fd758c3d67\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-hbb7l"
	Dec 17 00:41:57 old-k8s-version-742860 kubelet[727]: I1217 00:41:57.808882     727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8xzg\" (UniqueName: \"kubernetes.io/projected/ea6229a9-6cc8-4a75-a422-59e0f08b134d-kube-api-access-z8xzg\") pod \"kubernetes-dashboard-8694d4445c-hl62s\" (UID: \"ea6229a9-6cc8-4a75-a422-59e0f08b134d\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-hl62s"
	Dec 17 00:41:57 old-k8s-version-742860 kubelet[727]: I1217 00:41:57.808912     727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/ea6229a9-6cc8-4a75-a422-59e0f08b134d-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-hl62s\" (UID: \"ea6229a9-6cc8-4a75-a422-59e0f08b134d\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-hl62s"
	Dec 17 00:42:06 old-k8s-version-742860 kubelet[727]: I1217 00:42:06.093315     727 scope.go:117] "RemoveContainer" containerID="4d4e827663dd03284a764ac2f5372619101f2e614e72c6ba517394082e67c65c"
	Dec 17 00:42:06 old-k8s-version-742860 kubelet[727]: I1217 00:42:06.156053     727 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-hl62s" podStartSLOduration=4.984301428 podCreationTimestamp="2025-12-17 00:41:57 +0000 UTC" firstStartedPulling="2025-12-17 00:41:58.032096628 +0000 UTC m=+16.133473131" lastFinishedPulling="2025-12-17 00:42:02.20372331 +0000 UTC m=+20.305099818" observedRunningTime="2025-12-17 00:42:03.110583698 +0000 UTC m=+21.211960226" watchObservedRunningTime="2025-12-17 00:42:06.155928115 +0000 UTC m=+24.257304623"
	Dec 17 00:42:07 old-k8s-version-742860 kubelet[727]: I1217 00:42:07.098043     727 scope.go:117] "RemoveContainer" containerID="4d4e827663dd03284a764ac2f5372619101f2e614e72c6ba517394082e67c65c"
	Dec 17 00:42:07 old-k8s-version-742860 kubelet[727]: I1217 00:42:07.098238     727 scope.go:117] "RemoveContainer" containerID="324ad454b6f2d48651270a786dfd3cf704b868b70b7f2becfd1846029681442c"
	Dec 17 00:42:07 old-k8s-version-742860 kubelet[727]: E1217 00:42:07.098627     727 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-hbb7l_kubernetes-dashboard(7becd935-9668-4f00-b6fd-b0fd758c3d67)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-hbb7l" podUID="7becd935-9668-4f00-b6fd-b0fd758c3d67"
	Dec 17 00:42:08 old-k8s-version-742860 kubelet[727]: I1217 00:42:08.102251     727 scope.go:117] "RemoveContainer" containerID="324ad454b6f2d48651270a786dfd3cf704b868b70b7f2becfd1846029681442c"
	Dec 17 00:42:08 old-k8s-version-742860 kubelet[727]: E1217 00:42:08.102600     727 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-hbb7l_kubernetes-dashboard(7becd935-9668-4f00-b6fd-b0fd758c3d67)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-hbb7l" podUID="7becd935-9668-4f00-b6fd-b0fd758c3d67"
	Dec 17 00:42:09 old-k8s-version-742860 kubelet[727]: I1217 00:42:09.103910     727 scope.go:117] "RemoveContainer" containerID="324ad454b6f2d48651270a786dfd3cf704b868b70b7f2becfd1846029681442c"
	Dec 17 00:42:09 old-k8s-version-742860 kubelet[727]: E1217 00:42:09.104226     727 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-hbb7l_kubernetes-dashboard(7becd935-9668-4f00-b6fd-b0fd758c3d67)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-hbb7l" podUID="7becd935-9668-4f00-b6fd-b0fd758c3d67"
	Dec 17 00:42:17 old-k8s-version-742860 kubelet[727]: I1217 00:42:17.122637     727 scope.go:117] "RemoveContainer" containerID="2665e23f8c1d4b1b60afb71c02d698261aab64c7615ff9ebd12d544814363589"
	Dec 17 00:42:23 old-k8s-version-742860 kubelet[727]: I1217 00:42:23.007069     727 scope.go:117] "RemoveContainer" containerID="324ad454b6f2d48651270a786dfd3cf704b868b70b7f2becfd1846029681442c"
	Dec 17 00:42:23 old-k8s-version-742860 kubelet[727]: I1217 00:42:23.142152     727 scope.go:117] "RemoveContainer" containerID="324ad454b6f2d48651270a786dfd3cf704b868b70b7f2becfd1846029681442c"
	Dec 17 00:42:23 old-k8s-version-742860 kubelet[727]: I1217 00:42:23.142426     727 scope.go:117] "RemoveContainer" containerID="c2f5e2e55fdb212b11fe534765cef6051904899119c3b0f0d2895cdc1bad1d6c"
	Dec 17 00:42:23 old-k8s-version-742860 kubelet[727]: E1217 00:42:23.142799     727 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-hbb7l_kubernetes-dashboard(7becd935-9668-4f00-b6fd-b0fd758c3d67)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-hbb7l" podUID="7becd935-9668-4f00-b6fd-b0fd758c3d67"
	Dec 17 00:42:28 old-k8s-version-742860 kubelet[727]: I1217 00:42:28.005472     727 scope.go:117] "RemoveContainer" containerID="c2f5e2e55fdb212b11fe534765cef6051904899119c3b0f0d2895cdc1bad1d6c"
	Dec 17 00:42:28 old-k8s-version-742860 kubelet[727]: E1217 00:42:28.005831     727 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-hbb7l_kubernetes-dashboard(7becd935-9668-4f00-b6fd-b0fd758c3d67)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-hbb7l" podUID="7becd935-9668-4f00-b6fd-b0fd758c3d67"
	Dec 17 00:42:30 old-k8s-version-742860 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 17 00:42:31 old-k8s-version-742860 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 17 00:42:31 old-k8s-version-742860 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:42:31 old-k8s-version-742860 systemd[1]: kubelet.service: Consumed 1.447s CPU time.
	
	
	==> kubernetes-dashboard [7bfc386107bbed22f46f9153e98395f1b89a75e043668ed01443b61246824c81] <==
	2025/12/17 00:42:02 Using namespace: kubernetes-dashboard
	2025/12/17 00:42:02 Using in-cluster config to connect to apiserver
	2025/12/17 00:42:02 Using secret token for csrf signing
	2025/12/17 00:42:02 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/17 00:42:02 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/17 00:42:02 Successful initial request to the apiserver, version: v1.28.0
	2025/12/17 00:42:02 Generating JWE encryption key
	2025/12/17 00:42:02 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/17 00:42:02 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/17 00:42:02 Initializing JWE encryption key from synchronized object
	2025/12/17 00:42:02 Creating in-cluster Sidecar client
	2025/12/17 00:42:02 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/17 00:42:02 Serving insecurely on HTTP port: 9090
	2025/12/17 00:42:32 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/17 00:42:02 Starting overwatch
	
	
	==> storage-provisioner [2665e23f8c1d4b1b60afb71c02d698261aab64c7615ff9ebd12d544814363589] <==
	I1217 00:41:46.380551       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1217 00:42:16.383592       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [a4b60651ffd030d19b761fd3c47918c5ebbe75733ceae5c5e50c3e69b44beebb] <==
	I1217 00:42:18.311917       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1217 00:42:18.321088       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1217 00:42:18.321216       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-742860 -n old-k8s-version-742860
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-742860 -n old-k8s-version-742860: exit status 2 (394.646852ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context old-k8s-version-742860 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect old-k8s-version-742860
helpers_test.go:244: (dbg) docker inspect old-k8s-version-742860:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5f3317a25ba0dd672e7c7b2056cadfb4682b7ff2475d42648d9662ef39b8f59b",
	        "Created": "2025-12-17T00:40:24.632786552Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 275022,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T00:41:35.880122954Z",
	            "FinishedAt": "2025-12-17T00:41:34.100458139Z"
	        },
	        "Image": "sha256:2e44aac5cae5bb6b68b129ed5c85e80a5c1aac07706537d46ba12326f0e5c3cf",
	        "ResolvConfPath": "/var/lib/docker/containers/5f3317a25ba0dd672e7c7b2056cadfb4682b7ff2475d42648d9662ef39b8f59b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5f3317a25ba0dd672e7c7b2056cadfb4682b7ff2475d42648d9662ef39b8f59b/hostname",
	        "HostsPath": "/var/lib/docker/containers/5f3317a25ba0dd672e7c7b2056cadfb4682b7ff2475d42648d9662ef39b8f59b/hosts",
	        "LogPath": "/var/lib/docker/containers/5f3317a25ba0dd672e7c7b2056cadfb4682b7ff2475d42648d9662ef39b8f59b/5f3317a25ba0dd672e7c7b2056cadfb4682b7ff2475d42648d9662ef39b8f59b-json.log",
	        "Name": "/old-k8s-version-742860",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-742860:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-742860",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "5f3317a25ba0dd672e7c7b2056cadfb4682b7ff2475d42648d9662ef39b8f59b",
	                "LowerDir": "/var/lib/docker/overlay2/b3872e7dcb375ce53f1001878e7871d4e0b55db5e9e018b728e1b163a393d733-init/diff:/var/lib/docker/overlay2/594b812fd6d8db89dab322ea9e00d43dd555e9709fb5e6953e3873cce717392c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b3872e7dcb375ce53f1001878e7871d4e0b55db5e9e018b728e1b163a393d733/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b3872e7dcb375ce53f1001878e7871d4e0b55db5e9e018b728e1b163a393d733/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b3872e7dcb375ce53f1001878e7871d4e0b55db5e9e018b728e1b163a393d733/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-742860",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-742860/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-742860",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-742860",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-742860",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "6924cecc87f284162ce88652370c5d238e3c8cb993429b76187c7aebf689f686",
	            "SandboxKey": "/var/run/docker/netns/6924cecc87f2",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33068"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33069"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33072"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33070"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33071"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-742860": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "831a77a99d636c5f3163f99f25c807a931c002c29f68db2779eee3263784692b",
	                    "EndpointID": "88836b531f1124000066a35b25dfa76f960ba272df5bd09ed9a1b58a3921ea53",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "ba:b9:f3:5b:18:51",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-742860",
	                        "5f3317a25ba0"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-742860 -n old-k8s-version-742860
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-742860 -n old-k8s-version-742860: exit status 2 (351.334677ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-742860 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-742860 logs -n 25: (1.194464407s)
helpers_test.go:261: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p cert-options-636512 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-636512          │ jenkins │ v1.37.0 │ 17 Dec 25 00:39 UTC │ 17 Dec 25 00:40 UTC │
	│ ssh     │ cert-options-636512 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-636512          │ jenkins │ v1.37.0 │ 17 Dec 25 00:40 UTC │ 17 Dec 25 00:40 UTC │
	│ ssh     │ -p cert-options-636512 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-636512          │ jenkins │ v1.37.0 │ 17 Dec 25 00:40 UTC │ 17 Dec 25 00:40 UTC │
	│ delete  │ -p cert-options-636512                                                                                                                                                                                                                        │ cert-options-636512          │ jenkins │ v1.37.0 │ 17 Dec 25 00:40 UTC │ 17 Dec 25 00:40 UTC │
	│ start   │ -p old-k8s-version-742860 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-742860       │ jenkins │ v1.37.0 │ 17 Dec 25 00:40 UTC │ 17 Dec 25 00:41 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-742860 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-742860       │ jenkins │ v1.37.0 │ 17 Dec 25 00:41 UTC │                     │
	│ stop    │ -p old-k8s-version-742860 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-742860       │ jenkins │ v1.37.0 │ 17 Dec 25 00:41 UTC │ 17 Dec 25 00:41 UTC │
	│ delete  │ -p stopped-upgrade-028618                                                                                                                                                                                                                     │ stopped-upgrade-028618       │ jenkins │ v1.37.0 │ 17 Dec 25 00:41 UTC │ 17 Dec 25 00:41 UTC │
	│ start   │ -p no-preload-864613 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                │ no-preload-864613            │ jenkins │ v1.37.0 │ 17 Dec 25 00:41 UTC │ 17 Dec 25 00:42 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-742860 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-742860       │ jenkins │ v1.37.0 │ 17 Dec 25 00:41 UTC │ 17 Dec 25 00:41 UTC │
	│ start   │ -p old-k8s-version-742860 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-742860       │ jenkins │ v1.37.0 │ 17 Dec 25 00:41 UTC │ 17 Dec 25 00:42 UTC │
	│ start   │ -p cert-expiration-753607 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-753607       │ jenkins │ v1.37.0 │ 17 Dec 25 00:41 UTC │ 17 Dec 25 00:41 UTC │
	│ delete  │ -p cert-expiration-753607                                                                                                                                                                                                                     │ cert-expiration-753607       │ jenkins │ v1.37.0 │ 17 Dec 25 00:41 UTC │ 17 Dec 25 00:42 UTC │
	│ start   │ -p embed-certs-153232 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                        │ embed-certs-153232           │ jenkins │ v1.37.0 │ 17 Dec 25 00:42 UTC │                     │
	│ start   │ -p kubernetes-upgrade-803959 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-803959    │ jenkins │ v1.37.0 │ 17 Dec 25 00:42 UTC │                     │
	│ start   │ -p kubernetes-upgrade-803959 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                               │ kubernetes-upgrade-803959    │ jenkins │ v1.37.0 │ 17 Dec 25 00:42 UTC │ 17 Dec 25 00:42 UTC │
	│ delete  │ -p kubernetes-upgrade-803959                                                                                                                                                                                                                  │ kubernetes-upgrade-803959    │ jenkins │ v1.37.0 │ 17 Dec 25 00:42 UTC │ 17 Dec 25 00:42 UTC │
	│ delete  │ -p disable-driver-mounts-827138                                                                                                                                                                                                               │ disable-driver-mounts-827138 │ jenkins │ v1.37.0 │ 17 Dec 25 00:42 UTC │ 17 Dec 25 00:42 UTC │
	│ start   │ -p default-k8s-diff-port-414413 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                      │ default-k8s-diff-port-414413 │ jenkins │ v1.37.0 │ 17 Dec 25 00:42 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-864613 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-864613            │ jenkins │ v1.37.0 │ 17 Dec 25 00:42 UTC │                     │
	│ stop    │ -p no-preload-864613 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-864613            │ jenkins │ v1.37.0 │ 17 Dec 25 00:42 UTC │ 17 Dec 25 00:42 UTC │
	│ image   │ old-k8s-version-742860 image list --format=json                                                                                                                                                                                               │ old-k8s-version-742860       │ jenkins │ v1.37.0 │ 17 Dec 25 00:42 UTC │ 17 Dec 25 00:42 UTC │
	│ pause   │ -p old-k8s-version-742860 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-742860       │ jenkins │ v1.37.0 │ 17 Dec 25 00:42 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-864613 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-864613            │ jenkins │ v1.37.0 │ 17 Dec 25 00:42 UTC │ 17 Dec 25 00:42 UTC │
	│ start   │ -p no-preload-864613 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                │ no-preload-864613            │ jenkins │ v1.37.0 │ 17 Dec 25 00:42 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 00:42:34
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 00:42:34.764752  290128 out.go:360] Setting OutFile to fd 1 ...
	I1217 00:42:34.764957  290128 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:42:34.764966  290128 out.go:374] Setting ErrFile to fd 2...
	I1217 00:42:34.764971  290128 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:42:34.765186  290128 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-12816/.minikube/bin
	I1217 00:42:34.765607  290128 out.go:368] Setting JSON to false
	I1217 00:42:34.766739  290128 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":5105,"bootTime":1765927050,"procs":321,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 00:42:34.766800  290128 start.go:143] virtualization: kvm guest
	I1217 00:42:34.768521  290128 out.go:179] * [no-preload-864613] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 00:42:34.770218  290128 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 00:42:34.770219  290128 notify.go:221] Checking for updates...
	I1217 00:42:34.772552  290128 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 00:42:34.773718  290128 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22168-12816/kubeconfig
	I1217 00:42:34.775111  290128 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-12816/.minikube
	I1217 00:42:34.778189  290128 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 00:42:34.779590  290128 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 00:42:34.781260  290128 config.go:182] Loaded profile config "no-preload-864613": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1217 00:42:34.781927  290128 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 00:42:34.810090  290128 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1217 00:42:34.810186  290128 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:42:34.867563  290128 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-17 00:42:34.85663635 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 00:42:34.867662  290128 docker.go:319] overlay module found
	I1217 00:42:34.869118  290128 out.go:179] * Using the docker driver based on existing profile
	I1217 00:42:34.870719  284412 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.50164644s
	I1217 00:42:34.890591  284412 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1217 00:42:34.902596  284412 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1217 00:42:34.913324  284412 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1217 00:42:34.913741  284412 kubeadm.go:319] [mark-control-plane] Marking the node default-k8s-diff-port-414413 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1217 00:42:34.926204  284412 kubeadm.go:319] [bootstrap-token] Using token: ozsnc6.t1bk90aflvlqyzxz
	I1217 00:42:34.870545  290128 start.go:309] selected driver: docker
	I1217 00:42:34.870562  290128 start.go:927] validating driver "docker" against &{Name:no-preload-864613 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-864613 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:42:34.870662  290128 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 00:42:34.871449  290128 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:42:34.938867  290128 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-17 00:42:34.927217793 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 00:42:34.939271  290128 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 00:42:34.939307  290128 cni.go:84] Creating CNI manager for ""
	I1217 00:42:34.939373  290128 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 00:42:34.939428  290128 start.go:353] cluster config:
	{Name:no-preload-864613 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-864613 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:42:34.942443  290128 out.go:179] * Starting "no-preload-864613" primary control-plane node in "no-preload-864613" cluster
	I1217 00:42:34.943907  290128 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 00:42:34.945076  290128 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	
	
	==> CRI-O <==
	Dec 17 00:42:06 old-k8s-version-742860 crio[566]: time="2025-12-17T00:42:06.336153207Z" level=info msg="Started container" PID=1728 containerID=324ad454b6f2d48651270a786dfd3cf704b868b70b7f2becfd1846029681442c description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-hbb7l/dashboard-metrics-scraper id=fbc2917e-e10b-49ce-a2a2-5d30513d0add name=/runtime.v1.RuntimeService/StartContainer sandboxID=3a3cfac6985eeef058d7ca64769e8a0c4845aeed1d56a82c91b017277e982f51
	Dec 17 00:42:07 old-k8s-version-742860 crio[566]: time="2025-12-17T00:42:07.182087494Z" level=info msg="Removing container: 4d4e827663dd03284a764ac2f5372619101f2e614e72c6ba517394082e67c65c" id=c06753bf-68f9-4bb7-8444-e61daa6fa874 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 17 00:42:07 old-k8s-version-742860 crio[566]: time="2025-12-17T00:42:07.202088561Z" level=info msg="Removed container 4d4e827663dd03284a764ac2f5372619101f2e614e72c6ba517394082e67c65c: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-hbb7l/dashboard-metrics-scraper" id=c06753bf-68f9-4bb7-8444-e61daa6fa874 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 17 00:42:17 old-k8s-version-742860 crio[566]: time="2025-12-17T00:42:17.123123739Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=4b990ae4-c208-4fa0-acef-32e723ce7e4e name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:42:17 old-k8s-version-742860 crio[566]: time="2025-12-17T00:42:17.124099048Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=23563d03-7cd9-42e5-af03-39eee936dc2c name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:42:17 old-k8s-version-742860 crio[566]: time="2025-12-17T00:42:17.152904487Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=d724efa8-b33b-4615-928a-9b1450923521 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 00:42:17 old-k8s-version-742860 crio[566]: time="2025-12-17T00:42:17.153060316Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 00:42:17 old-k8s-version-742860 crio[566]: time="2025-12-17T00:42:17.213324004Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 00:42:17 old-k8s-version-742860 crio[566]: time="2025-12-17T00:42:17.213536791Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/2e9ae91bafd056c4b78af48bb26353eec43bd54356f0718859ccefe177147b62/merged/etc/passwd: no such file or directory"
	Dec 17 00:42:17 old-k8s-version-742860 crio[566]: time="2025-12-17T00:42:17.213565143Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/2e9ae91bafd056c4b78af48bb26353eec43bd54356f0718859ccefe177147b62/merged/etc/group: no such file or directory"
	Dec 17 00:42:17 old-k8s-version-742860 crio[566]: time="2025-12-17T00:42:17.213875854Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 00:42:17 old-k8s-version-742860 crio[566]: time="2025-12-17T00:42:17.308156301Z" level=info msg="Created container a4b60651ffd030d19b761fd3c47918c5ebbe75733ceae5c5e50c3e69b44beebb: kube-system/storage-provisioner/storage-provisioner" id=d724efa8-b33b-4615-928a-9b1450923521 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 00:42:17 old-k8s-version-742860 crio[566]: time="2025-12-17T00:42:17.308872007Z" level=info msg="Starting container: a4b60651ffd030d19b761fd3c47918c5ebbe75733ceae5c5e50c3e69b44beebb" id=4bb73f62-e5bf-4c0b-b307-9635f445bc74 name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 00:42:17 old-k8s-version-742860 crio[566]: time="2025-12-17T00:42:17.311306225Z" level=info msg="Started container" PID=1749 containerID=a4b60651ffd030d19b761fd3c47918c5ebbe75733ceae5c5e50c3e69b44beebb description=kube-system/storage-provisioner/storage-provisioner id=4bb73f62-e5bf-4c0b-b307-9635f445bc74 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c278dcf4f5cfb7b95abffae9c794456e1ad173ca8dfc893bd73c9bf4841c6dba
	Dec 17 00:42:23 old-k8s-version-742860 crio[566]: time="2025-12-17T00:42:23.007655927Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=c49a7e86-5671-405a-a3d6-47980d6456b8 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:42:23 old-k8s-version-742860 crio[566]: time="2025-12-17T00:42:23.008606496Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=f7e0ae95-c548-4ae3-99e2-5540fbdcedbe name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:42:23 old-k8s-version-742860 crio[566]: time="2025-12-17T00:42:23.009741431Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-hbb7l/dashboard-metrics-scraper" id=ecfb2c4b-dae6-49f3-97b1-7ff81ebc4aab name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 00:42:23 old-k8s-version-742860 crio[566]: time="2025-12-17T00:42:23.009860398Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 00:42:23 old-k8s-version-742860 crio[566]: time="2025-12-17T00:42:23.016038459Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 00:42:23 old-k8s-version-742860 crio[566]: time="2025-12-17T00:42:23.016454842Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 00:42:23 old-k8s-version-742860 crio[566]: time="2025-12-17T00:42:23.042190667Z" level=info msg="Created container c2f5e2e55fdb212b11fe534765cef6051904899119c3b0f0d2895cdc1bad1d6c: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-hbb7l/dashboard-metrics-scraper" id=ecfb2c4b-dae6-49f3-97b1-7ff81ebc4aab name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 00:42:23 old-k8s-version-742860 crio[566]: time="2025-12-17T00:42:23.042730485Z" level=info msg="Starting container: c2f5e2e55fdb212b11fe534765cef6051904899119c3b0f0d2895cdc1bad1d6c" id=4c511198-2d0a-4e86-a9ad-fdb67b92f828 name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 00:42:23 old-k8s-version-742860 crio[566]: time="2025-12-17T00:42:23.044361397Z" level=info msg="Started container" PID=1781 containerID=c2f5e2e55fdb212b11fe534765cef6051904899119c3b0f0d2895cdc1bad1d6c description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-hbb7l/dashboard-metrics-scraper id=4c511198-2d0a-4e86-a9ad-fdb67b92f828 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3a3cfac6985eeef058d7ca64769e8a0c4845aeed1d56a82c91b017277e982f51
	Dec 17 00:42:23 old-k8s-version-742860 crio[566]: time="2025-12-17T00:42:23.143918989Z" level=info msg="Removing container: 324ad454b6f2d48651270a786dfd3cf704b868b70b7f2becfd1846029681442c" id=df405ba9-53c1-4115-a067-3a022f7e7aaa name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 17 00:42:23 old-k8s-version-742860 crio[566]: time="2025-12-17T00:42:23.154135202Z" level=info msg="Removed container 324ad454b6f2d48651270a786dfd3cf704b868b70b7f2becfd1846029681442c: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-hbb7l/dashboard-metrics-scraper" id=df405ba9-53c1-4115-a067-3a022f7e7aaa name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	c2f5e2e55fdb2       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           12 seconds ago      Exited              dashboard-metrics-scraper   2                   3a3cfac6985ee       dashboard-metrics-scraper-5f989dc9cf-hbb7l       kubernetes-dashboard
	a4b60651ffd03       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           18 seconds ago      Running             storage-provisioner         1                   c278dcf4f5cfb       storage-provisioner                              kube-system
	7bfc386107bbe       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   33 seconds ago      Running             kubernetes-dashboard        0                   3ba73abd27b0d       kubernetes-dashboard-8694d4445c-hl62s            kubernetes-dashboard
	9af078d322a28       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           49 seconds ago      Running             busybox                     1                   c208b89a4b136       busybox                                          default
	fbbd450213143       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           49 seconds ago      Running             coredns                     0                   ddbd1dea0f2dc       coredns-5dd5756b68-zsfnr                         kube-system
	98e5f84dacace       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           49 seconds ago      Running             kube-proxy                  0                   adce65bacc882       kube-proxy-ltxr5                                 kube-system
	1cf05ac31ba64       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           49 seconds ago      Running             kindnet-cni                 0                   3494d5997013d       kindnet-9sklv                                    kube-system
	2665e23f8c1d4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           49 seconds ago      Exited              storage-provisioner         0                   c278dcf4f5cfb       storage-provisioner                              kube-system
	adcc6538e3f24       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           53 seconds ago      Running             etcd                        0                   4542710d8ef9a       etcd-old-k8s-version-742860                      kube-system
	ee16447f516ad       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           53 seconds ago      Running             kube-apiserver              0                   f913bcfddc2ba       kube-apiserver-old-k8s-version-742860            kube-system
	0051bcc55466b       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           53 seconds ago      Running             kube-controller-manager     0                   b5cb735fa0175       kube-controller-manager-old-k8s-version-742860   kube-system
	d2bacdc7b5ee7       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           53 seconds ago      Running             kube-scheduler              0                   57cca05a9bc74       kube-scheduler-old-k8s-version-742860            kube-system
	
	
	==> coredns [fbbd45021314326d8ef46c26a084d16832861775c1e3e32409593901efb2be3e] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 4c7f44b73086be760ec9e64204f63c5cc5a952c8c1c55ba0b41d8fc3315ce3c7d0259d04847cb8b4561043d4549603f3bccfd9b397eeb814eef159d244d26f39
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:47927 - 21497 "HINFO IN 283357864927804702.3965380075545912610. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.086054929s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               old-k8s-version-742860
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-742860
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c7bb9b74fe8fa422b352c813eb039f077f405cb1
	                    minikube.k8s.io/name=old-k8s-version-742860
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_17T00_40_40_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Dec 2025 00:40:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-742860
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Dec 2025 00:42:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Dec 2025 00:42:16 +0000   Wed, 17 Dec 2025 00:40:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Dec 2025 00:42:16 +0000   Wed, 17 Dec 2025 00:40:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Dec 2025 00:42:16 +0000   Wed, 17 Dec 2025 00:40:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Dec 2025 00:42:16 +0000   Wed, 17 Dec 2025 00:41:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    old-k8s-version-742860
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 f435001bb2f82675fe905cfc693dda54
	  System UUID:                adda18ce-4c65-4338-86bc-e27f9ae5140e
	  Boot ID:                    0e9cedc6-c46e-4354-b3d2-9272a8b33ae5
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 coredns-5dd5756b68-zsfnr                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     102s
	  kube-system                 etcd-old-k8s-version-742860                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         116s
	  kube-system                 kindnet-9sklv                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      102s
	  kube-system                 kube-apiserver-old-k8s-version-742860             250m (3%)     0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-controller-manager-old-k8s-version-742860    200m (2%)     0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-proxy-ltxr5                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         102s
	  kube-system                 kube-scheduler-old-k8s-version-742860             100m (1%)     0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         102s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-hbb7l        0 (0%)        0 (0%)      0 (0%)           0 (0%)         38s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-hl62s             0 (0%)        0 (0%)      0 (0%)           0 (0%)         38s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 102s               kube-proxy       
	  Normal  Starting                 49s                kube-proxy       
	  Normal  NodeHasSufficientMemory  116s               kubelet          Node old-k8s-version-742860 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    116s               kubelet          Node old-k8s-version-742860 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     116s               kubelet          Node old-k8s-version-742860 status is now: NodeHasSufficientPID
	  Normal  Starting                 116s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           103s               node-controller  Node old-k8s-version-742860 event: Registered Node old-k8s-version-742860 in Controller
	  Normal  NodeReady                90s                kubelet          Node old-k8s-version-742860 status is now: NodeReady
	  Normal  Starting                 54s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  53s (x8 over 54s)  kubelet          Node old-k8s-version-742860 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    53s (x8 over 54s)  kubelet          Node old-k8s-version-742860 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     53s (x8 over 54s)  kubelet          Node old-k8s-version-742860 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           38s                node-controller  Node old-k8s-version-742860 event: Registered Node old-k8s-version-742860 in Controller
	
	
	==> dmesg <==
	[  +0.089382] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024236] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.864694] kauditd_printk_skb: 47 callbacks suppressed
	[Dec17 00:07] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +1.006904] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +1.023891] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +1.023891] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +1.023853] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +1.023879] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +2.048755] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +4.030595] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +8.447143] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[ +16.382404] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000015] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[Dec17 00:08] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	
	
	==> etcd [adcc6538e3f24669f21be38c15820585a7a1c212e5fe02516c0874b1b88999cb] <==
	{"level":"info","ts":"2025-12-17T00:42:06.323495Z","caller":"traceutil/trace.go:171","msg":"trace[1337757081] linearizableReadLoop","detail":"{readStateIndex:608; appliedIndex:606; }","duration":"133.179621ms","start":"2025-12-17T00:42:06.190295Z","end":"2025-12-17T00:42:06.323474Z","steps":["trace[1337757081] 'read index received'  (duration: 96.287147ms)","trace[1337757081] 'applied index is now lower than readState.Index'  (duration: 36.891659ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-17T00:42:06.323612Z","caller":"traceutil/trace.go:171","msg":"trace[839948030] transaction","detail":"{read_only:false; response_revision:578; number_of_response:1; }","duration":"167.331723ms","start":"2025-12-17T00:42:06.156256Z","end":"2025-12-17T00:42:06.323588Z","steps":["trace[839948030] 'process raft request'  (duration: 130.241292ms)","trace[839948030] 'compare'  (duration: 36.694335ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-17T00:42:06.323791Z","caller":"traceutil/trace.go:171","msg":"trace[2042802556] transaction","detail":"{read_only:false; response_revision:579; number_of_response:1; }","duration":"165.845915ms","start":"2025-12-17T00:42:06.1579Z","end":"2025-12-17T00:42:06.323746Z","steps":["trace[2042802556] 'process raft request'  (duration: 165.500455ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-17T00:42:06.323792Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"133.510105ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-5dd5756b68-zsfnr\" ","response":"range_response_count:1 size:4991"}
	{"level":"info","ts":"2025-12-17T00:42:06.323858Z","caller":"traceutil/trace.go:171","msg":"trace[259088056] range","detail":"{range_begin:/registry/pods/kube-system/coredns-5dd5756b68-zsfnr; range_end:; response_count:1; response_revision:579; }","duration":"133.60205ms","start":"2025-12-17T00:42:06.190245Z","end":"2025-12-17T00:42:06.323847Z","steps":["trace[259088056] 'agreement among raft nodes before linearized reading'  (duration: 133.468445ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-17T00:42:06.727316Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"266.993668ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6571766884905226960 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-hbb7l.1881d9e9560565d4\" mod_revision:572 > success:<request_put:<key:\"/registry/events/kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-hbb7l.1881d9e9560565d4\" value_size:753 lease:6571766884905226362 >> failure:<request_range:<key:\"/registry/events/kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-hbb7l.1881d9e9560565d4\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-12-17T00:42:06.727513Z","caller":"traceutil/trace.go:171","msg":"trace[1195386968] transaction","detail":"{read_only:false; response_revision:580; number_of_response:1; }","duration":"390.594576ms","start":"2025-12-17T00:42:06.336888Z","end":"2025-12-17T00:42:06.727483Z","steps":["trace[1195386968] 'process raft request'  (duration: 122.707954ms)","trace[1195386968] 'compare'  (duration: 266.908095ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-17T00:42:06.727631Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-17T00:42:06.336871Z","time spent":"390.700977ms","remote":"127.0.0.1:33744","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":868,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/events/kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-hbb7l.1881d9e9560565d4\" mod_revision:572 > success:<request_put:<key:\"/registry/events/kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-hbb7l.1881d9e9560565d4\" value_size:753 lease:6571766884905226362 >> failure:<request_range:<key:\"/registry/events/kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-hbb7l.1881d9e9560565d4\" > >"}
	{"level":"info","ts":"2025-12-17T00:42:17.085323Z","caller":"traceutil/trace.go:171","msg":"trace[762366583] transaction","detail":"{read_only:false; response_revision:595; number_of_response:1; }","duration":"224.347568ms","start":"2025-12-17T00:42:16.860954Z","end":"2025-12-17T00:42:17.085301Z","steps":["trace[762366583] 'process raft request'  (duration: 143.443207ms)","trace[762366583] 'compare'  (duration: 80.751426ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-17T00:42:17.085435Z","caller":"traceutil/trace.go:171","msg":"trace[1037565871] transaction","detail":"{read_only:false; response_revision:596; number_of_response:1; }","duration":"222.713763ms","start":"2025-12-17T00:42:16.862703Z","end":"2025-12-17T00:42:17.085417Z","steps":["trace[1037565871] 'process raft request'  (duration: 222.550897ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T00:42:17.21269Z","caller":"traceutil/trace.go:171","msg":"trace[1664020812] transaction","detail":"{read_only:false; response_revision:597; number_of_response:1; }","duration":"119.945544ms","start":"2025-12-17T00:42:17.092722Z","end":"2025-12-17T00:42:17.212667Z","steps":["trace[1664020812] 'process raft request'  (duration: 97.497048ms)","trace[1664020812] 'compare'  (duration: 22.079703ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-17T00:42:17.441167Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"127.881631ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6571766884905227059 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/storage-provisioner.1881d9e4f8f4f9d0\" mod_revision:480 > success:<request_put:<key:\"/registry/events/kube-system/storage-provisioner.1881d9e4f8f4f9d0\" value_size:676 lease:6571766884905226362 >> failure:<request_range:<key:\"/registry/events/kube-system/storage-provisioner.1881d9e4f8f4f9d0\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-12-17T00:42:17.441297Z","caller":"traceutil/trace.go:171","msg":"trace[30602521] transaction","detail":"{read_only:false; response_revision:600; number_of_response:1; }","duration":"128.570085ms","start":"2025-12-17T00:42:17.312704Z","end":"2025-12-17T00:42:17.441275Z","steps":["trace[30602521] 'compare'  (duration: 127.769217ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-17T00:42:17.441433Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"127.032527ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-old-k8s-version-742860\" ","response":"range_response_count:1 size:5233"}
	{"level":"info","ts":"2025-12-17T00:42:17.441459Z","caller":"traceutil/trace.go:171","msg":"trace[778675753] range","detail":"{range_begin:/registry/pods/kube-system/etcd-old-k8s-version-742860; range_end:; response_count:1; response_revision:600; }","duration":"127.061404ms","start":"2025-12-17T00:42:17.314391Z","end":"2025-12-17T00:42:17.441452Z","steps":["trace[778675753] 'agreement among raft nodes before linearized reading'  (duration: 126.950487ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T00:42:17.441324Z","caller":"traceutil/trace.go:171","msg":"trace[693293040] linearizableReadLoop","detail":"{readStateIndex:632; appliedIndex:631; }","duration":"126.895327ms","start":"2025-12-17T00:42:17.314418Z","end":"2025-12-17T00:42:17.441313Z","steps":["trace[693293040] 'read index received'  (duration: 21.846µs)","trace[693293040] 'applied index is now lower than readState.Index'  (duration: 126.872398ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-17T00:42:17.690688Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"128.694061ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6571766884905227063 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/storage-provisioner.1881d9e4f9af2dde\" mod_revision:481 > success:<request_put:<key:\"/registry/events/kube-system/storage-provisioner.1881d9e4f9af2dde\" value_size:676 lease:6571766884905226362 >> failure:<request_range:<key:\"/registry/events/kube-system/storage-provisioner.1881d9e4f9af2dde\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-12-17T00:42:17.690777Z","caller":"traceutil/trace.go:171","msg":"trace[1845861456] linearizableReadLoop","detail":"{readStateIndex:633; appliedIndex:632; }","duration":"244.170352ms","start":"2025-12-17T00:42:17.446595Z","end":"2025-12-17T00:42:17.690765Z","steps":["trace[1845861456] 'read index received'  (duration: 115.326744ms)","trace[1845861456] 'applied index is now lower than readState.Index'  (duration: 128.842478ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-17T00:42:17.690812Z","caller":"traceutil/trace.go:171","msg":"trace[474943943] transaction","detail":"{read_only:false; response_revision:601; number_of_response:1; }","duration":"245.189246ms","start":"2025-12-17T00:42:17.445604Z","end":"2025-12-17T00:42:17.690793Z","steps":["trace[474943943] 'process raft request'  (duration: 116.314586ms)","trace[474943943] 'compare'  (duration: 128.595914ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-17T00:42:17.690943Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"244.361762ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:8 size:41490"}
	{"level":"info","ts":"2025-12-17T00:42:17.690969Z","caller":"traceutil/trace.go:171","msg":"trace[1420928011] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:8; response_revision:601; }","duration":"244.390315ms","start":"2025-12-17T00:42:17.446571Z","end":"2025-12-17T00:42:17.690961Z","steps":["trace[1420928011] 'agreement among raft nodes before linearized reading'  (duration: 244.227687ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-17T00:42:17.942966Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"115.972531ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-17T00:42:17.943047Z","caller":"traceutil/trace.go:171","msg":"trace[2065532751] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:601; }","duration":"116.076662ms","start":"2025-12-17T00:42:17.826958Z","end":"2025-12-17T00:42:17.943035Z","steps":["trace[2065532751] 'range keys from in-memory index tree'  (duration: 115.858279ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-17T00:42:17.943029Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"152.340112ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-proxy-ltxr5\" ","response":"range_response_count:1 size:4429"}
	{"level":"info","ts":"2025-12-17T00:42:17.943174Z","caller":"traceutil/trace.go:171","msg":"trace[7598113] range","detail":"{range_begin:/registry/pods/kube-system/kube-proxy-ltxr5; range_end:; response_count:1; response_revision:601; }","duration":"152.487217ms","start":"2025-12-17T00:42:17.790665Z","end":"2025-12-17T00:42:17.943152Z","steps":["trace[7598113] 'range keys from in-memory index tree'  (duration: 152.178474ms)"],"step_count":1}
	
	
	==> kernel <==
	 00:42:35 up  1:25,  0 user,  load average: 3.52, 2.74, 1.87
	Linux old-k8s-version-742860 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [1cf05ac31ba64f118b784ca8ad4b1a57919383d731f155b068c9565667ca62b7] <==
	I1217 00:41:46.610747       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1217 00:41:46.611209       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1217 00:41:46.611404       1 main.go:148] setting mtu 1500 for CNI 
	I1217 00:41:46.611462       1 main.go:178] kindnetd IP family: "ipv4"
	I1217 00:41:46.611509       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-17T00:41:46Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1217 00:41:46.813910       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1217 00:41:46.814194       1 controller.go:381] "Waiting for informer caches to sync"
	I1217 00:41:46.814217       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1217 00:41:46.814331       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1217 00:41:47.207492       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1217 00:41:47.207529       1 metrics.go:72] Registering metrics
	I1217 00:41:47.207592       1 controller.go:711] "Syncing nftables rules"
	I1217 00:41:56.817139       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1217 00:41:56.817183       1 main.go:301] handling current node
	I1217 00:42:06.814712       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1217 00:42:06.814763       1 main.go:301] handling current node
	I1217 00:42:16.813823       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1217 00:42:16.813903       1 main.go:301] handling current node
	I1217 00:42:26.816092       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1217 00:42:26.816122       1 main.go:301] handling current node
	
	
	==> kube-apiserver [ee16447f516ad3ada79ab1622f36739e4a14d0598fdc1d80fa27279d2d0e2ad8] <==
	I1217 00:41:45.398305       1 crd_finalizer.go:266] Starting CRDFinalizer
	I1217 00:41:45.455120       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1217 00:41:45.486112       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1217 00:41:45.486143       1 shared_informer.go:318] Caches are synced for configmaps
	I1217 00:41:45.486150       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1217 00:41:45.486296       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1217 00:41:45.486312       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1217 00:41:45.486314       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1217 00:41:45.487245       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1217 00:41:45.487275       1 aggregator.go:166] initial CRD sync complete...
	I1217 00:41:45.487282       1 autoregister_controller.go:141] Starting autoregister controller
	I1217 00:41:45.487287       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1217 00:41:45.487293       1 cache.go:39] Caches are synced for autoregister controller
	I1217 00:41:45.526772       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1217 00:41:46.390543       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1217 00:41:46.507761       1 controller.go:624] quota admission added evaluator for: namespaces
	I1217 00:41:46.541123       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1217 00:41:46.561352       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1217 00:41:46.570506       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1217 00:41:46.579187       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1217 00:41:46.624822       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.110.70.2"}
	I1217 00:41:46.640679       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.104.165.81"}
	I1217 00:41:57.658403       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1217 00:41:57.806464       1 controller.go:624] quota admission added evaluator for: endpoints
	I1217 00:41:57.957657       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [0051bcc55466b549d043d19c7acbc02084dfafcf4a1b9fd1b4704776608fde49] <==
	I1217 00:41:57.807738       1 range_allocator.go:178] "Starting range CIDR allocator"
	I1217 00:41:57.807750       1 shared_informer.go:311] Waiting for caches to sync for cidrallocator
	I1217 00:41:57.807760       1 shared_informer.go:318] Caches are synced for cidrallocator
	I1217 00:41:57.833521       1 shared_informer.go:318] Caches are synced for taint
	I1217 00:41:57.833658       1 taint_manager.go:206] "Starting NoExecuteTaintManager"
	I1217 00:41:57.833705       1 taint_manager.go:211] "Sending events to api server"
	I1217 00:41:57.833751       1 node_lifecycle_controller.go:1225] "Initializing eviction metric for zone" zone=""
	I1217 00:41:57.833838       1 event.go:307] "Event occurred" object="old-k8s-version-742860" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node old-k8s-version-742860 event: Registered Node old-k8s-version-742860 in Controller"
	I1217 00:41:57.833873       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="old-k8s-version-742860"
	I1217 00:41:57.833941       1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1217 00:41:57.838283       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I1217 00:41:57.855414       1 shared_informer.go:318] Caches are synced for GC
	I1217 00:41:57.857776       1 shared_informer.go:318] Caches are synced for daemon sets
	I1217 00:41:58.184774       1 shared_informer.go:318] Caches are synced for garbage collector
	I1217 00:41:58.243660       1 shared_informer.go:318] Caches are synced for garbage collector
	I1217 00:41:58.243695       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1217 00:42:03.122171       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="12.321492ms"
	I1217 00:42:03.122609       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="80.2µs"
	I1217 00:42:06.153948       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="86.334µs"
	I1217 00:42:07.189770       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="86.919µs"
	I1217 00:42:08.112453       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="186.92µs"
	I1217 00:42:17.087360       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="229.166651ms"
	I1217 00:42:17.087483       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="78.777µs"
	I1217 00:42:23.154969       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="73.953µs"
	I1217 00:42:28.015263       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="107.308µs"
	
	
	==> kube-proxy [98e5f84dacacedac773b65d9a13392572f7924b62854fb99ecd793603a8f1d34] <==
	I1217 00:41:46.427068       1 server_others.go:69] "Using iptables proxy"
	I1217 00:41:46.439338       1 node.go:141] Successfully retrieved node IP: 192.168.94.2
	I1217 00:41:46.459891       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1217 00:41:46.462451       1 server_others.go:152] "Using iptables Proxier"
	I1217 00:41:46.462480       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1217 00:41:46.462487       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1217 00:41:46.462517       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1217 00:41:46.462798       1 server.go:846] "Version info" version="v1.28.0"
	I1217 00:41:46.462859       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 00:41:46.463606       1 config.go:315] "Starting node config controller"
	I1217 00:41:46.463678       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1217 00:41:46.464791       1 config.go:188] "Starting service config controller"
	I1217 00:41:46.464949       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1217 00:41:46.465073       1 config.go:97] "Starting endpoint slice config controller"
	I1217 00:41:46.465337       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1217 00:41:46.564631       1 shared_informer.go:318] Caches are synced for node config
	I1217 00:41:46.565859       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1217 00:41:46.565886       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [d2bacdc7b5ee7149039abbb534298bb0d1c50567e36970b8dde0a69f80ccd23c] <==
	I1217 00:41:43.139435       1 serving.go:348] Generated self-signed cert in-memory
	W1217 00:41:45.433417       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1217 00:41:45.433470       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1217 00:41:45.433484       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1217 00:41:45.433494       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1217 00:41:45.456075       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1217 00:41:45.456116       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 00:41:45.460155       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1217 00:41:45.460194       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1217 00:41:45.460437       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1217 00:41:45.460499       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1217 00:41:45.560642       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 17 00:41:57 old-k8s-version-742860 kubelet[727]: I1217 00:41:57.700298     727 topology_manager.go:215] "Topology Admit Handler" podUID="ea6229a9-6cc8-4a75-a422-59e0f08b134d" podNamespace="kubernetes-dashboard" podName="kubernetes-dashboard-8694d4445c-hl62s"
	Dec 17 00:41:57 old-k8s-version-742860 kubelet[727]: I1217 00:41:57.808809     727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/7becd935-9668-4f00-b6fd-b0fd758c3d67-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-hbb7l\" (UID: \"7becd935-9668-4f00-b6fd-b0fd758c3d67\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-hbb7l"
	Dec 17 00:41:57 old-k8s-version-742860 kubelet[727]: I1217 00:41:57.808855     727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzwmj\" (UniqueName: \"kubernetes.io/projected/7becd935-9668-4f00-b6fd-b0fd758c3d67-kube-api-access-dzwmj\") pod \"dashboard-metrics-scraper-5f989dc9cf-hbb7l\" (UID: \"7becd935-9668-4f00-b6fd-b0fd758c3d67\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-hbb7l"
	Dec 17 00:41:57 old-k8s-version-742860 kubelet[727]: I1217 00:41:57.808882     727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8xzg\" (UniqueName: \"kubernetes.io/projected/ea6229a9-6cc8-4a75-a422-59e0f08b134d-kube-api-access-z8xzg\") pod \"kubernetes-dashboard-8694d4445c-hl62s\" (UID: \"ea6229a9-6cc8-4a75-a422-59e0f08b134d\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-hl62s"
	Dec 17 00:41:57 old-k8s-version-742860 kubelet[727]: I1217 00:41:57.808912     727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/ea6229a9-6cc8-4a75-a422-59e0f08b134d-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-hl62s\" (UID: \"ea6229a9-6cc8-4a75-a422-59e0f08b134d\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-hl62s"
	Dec 17 00:42:06 old-k8s-version-742860 kubelet[727]: I1217 00:42:06.093315     727 scope.go:117] "RemoveContainer" containerID="4d4e827663dd03284a764ac2f5372619101f2e614e72c6ba517394082e67c65c"
	Dec 17 00:42:06 old-k8s-version-742860 kubelet[727]: I1217 00:42:06.156053     727 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-hl62s" podStartSLOduration=4.984301428 podCreationTimestamp="2025-12-17 00:41:57 +0000 UTC" firstStartedPulling="2025-12-17 00:41:58.032096628 +0000 UTC m=+16.133473131" lastFinishedPulling="2025-12-17 00:42:02.20372331 +0000 UTC m=+20.305099818" observedRunningTime="2025-12-17 00:42:03.110583698 +0000 UTC m=+21.211960226" watchObservedRunningTime="2025-12-17 00:42:06.155928115 +0000 UTC m=+24.257304623"
	Dec 17 00:42:07 old-k8s-version-742860 kubelet[727]: I1217 00:42:07.098043     727 scope.go:117] "RemoveContainer" containerID="4d4e827663dd03284a764ac2f5372619101f2e614e72c6ba517394082e67c65c"
	Dec 17 00:42:07 old-k8s-version-742860 kubelet[727]: I1217 00:42:07.098238     727 scope.go:117] "RemoveContainer" containerID="324ad454b6f2d48651270a786dfd3cf704b868b70b7f2becfd1846029681442c"
	Dec 17 00:42:07 old-k8s-version-742860 kubelet[727]: E1217 00:42:07.098627     727 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-hbb7l_kubernetes-dashboard(7becd935-9668-4f00-b6fd-b0fd758c3d67)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-hbb7l" podUID="7becd935-9668-4f00-b6fd-b0fd758c3d67"
	Dec 17 00:42:08 old-k8s-version-742860 kubelet[727]: I1217 00:42:08.102251     727 scope.go:117] "RemoveContainer" containerID="324ad454b6f2d48651270a786dfd3cf704b868b70b7f2becfd1846029681442c"
	Dec 17 00:42:08 old-k8s-version-742860 kubelet[727]: E1217 00:42:08.102600     727 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-hbb7l_kubernetes-dashboard(7becd935-9668-4f00-b6fd-b0fd758c3d67)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-hbb7l" podUID="7becd935-9668-4f00-b6fd-b0fd758c3d67"
	Dec 17 00:42:09 old-k8s-version-742860 kubelet[727]: I1217 00:42:09.103910     727 scope.go:117] "RemoveContainer" containerID="324ad454b6f2d48651270a786dfd3cf704b868b70b7f2becfd1846029681442c"
	Dec 17 00:42:09 old-k8s-version-742860 kubelet[727]: E1217 00:42:09.104226     727 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-hbb7l_kubernetes-dashboard(7becd935-9668-4f00-b6fd-b0fd758c3d67)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-hbb7l" podUID="7becd935-9668-4f00-b6fd-b0fd758c3d67"
	Dec 17 00:42:17 old-k8s-version-742860 kubelet[727]: I1217 00:42:17.122637     727 scope.go:117] "RemoveContainer" containerID="2665e23f8c1d4b1b60afb71c02d698261aab64c7615ff9ebd12d544814363589"
	Dec 17 00:42:23 old-k8s-version-742860 kubelet[727]: I1217 00:42:23.007069     727 scope.go:117] "RemoveContainer" containerID="324ad454b6f2d48651270a786dfd3cf704b868b70b7f2becfd1846029681442c"
	Dec 17 00:42:23 old-k8s-version-742860 kubelet[727]: I1217 00:42:23.142152     727 scope.go:117] "RemoveContainer" containerID="324ad454b6f2d48651270a786dfd3cf704b868b70b7f2becfd1846029681442c"
	Dec 17 00:42:23 old-k8s-version-742860 kubelet[727]: I1217 00:42:23.142426     727 scope.go:117] "RemoveContainer" containerID="c2f5e2e55fdb212b11fe534765cef6051904899119c3b0f0d2895cdc1bad1d6c"
	Dec 17 00:42:23 old-k8s-version-742860 kubelet[727]: E1217 00:42:23.142799     727 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-hbb7l_kubernetes-dashboard(7becd935-9668-4f00-b6fd-b0fd758c3d67)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-hbb7l" podUID="7becd935-9668-4f00-b6fd-b0fd758c3d67"
	Dec 17 00:42:28 old-k8s-version-742860 kubelet[727]: I1217 00:42:28.005472     727 scope.go:117] "RemoveContainer" containerID="c2f5e2e55fdb212b11fe534765cef6051904899119c3b0f0d2895cdc1bad1d6c"
	Dec 17 00:42:28 old-k8s-version-742860 kubelet[727]: E1217 00:42:28.005831     727 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-hbb7l_kubernetes-dashboard(7becd935-9668-4f00-b6fd-b0fd758c3d67)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-hbb7l" podUID="7becd935-9668-4f00-b6fd-b0fd758c3d67"
	Dec 17 00:42:30 old-k8s-version-742860 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 17 00:42:31 old-k8s-version-742860 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 17 00:42:31 old-k8s-version-742860 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:42:31 old-k8s-version-742860 systemd[1]: kubelet.service: Consumed 1.447s CPU time.
	
	
	==> kubernetes-dashboard [7bfc386107bbed22f46f9153e98395f1b89a75e043668ed01443b61246824c81] <==
	2025/12/17 00:42:02 Starting overwatch
	2025/12/17 00:42:02 Using namespace: kubernetes-dashboard
	2025/12/17 00:42:02 Using in-cluster config to connect to apiserver
	2025/12/17 00:42:02 Using secret token for csrf signing
	2025/12/17 00:42:02 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/17 00:42:02 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/17 00:42:02 Successful initial request to the apiserver, version: v1.28.0
	2025/12/17 00:42:02 Generating JWE encryption key
	2025/12/17 00:42:02 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/17 00:42:02 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/17 00:42:02 Initializing JWE encryption key from synchronized object
	2025/12/17 00:42:02 Creating in-cluster Sidecar client
	2025/12/17 00:42:02 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/17 00:42:02 Serving insecurely on HTTP port: 9090
	2025/12/17 00:42:32 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [2665e23f8c1d4b1b60afb71c02d698261aab64c7615ff9ebd12d544814363589] <==
	I1217 00:41:46.380551       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1217 00:42:16.383592       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [a4b60651ffd030d19b761fd3c47918c5ebbe75733ceae5c5e50c3e69b44beebb] <==
	I1217 00:42:18.311917       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1217 00:42:18.321088       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1217 00:42:18.321216       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1217 00:42:35.719573       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1217 00:42:35.719667       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f96f46b2-0bc0-44bd-93ae-70942e078e0e", APIVersion:"v1", ResourceVersion:"613", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-742860_08888708-7332-4e28-b30c-7fd9d545d98a became leader
	I1217 00:42:35.720372       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-742860_08888708-7332-4e28-b30c-7fd9d545d98a!
	I1217 00:42:35.820603       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-742860_08888708-7332-4e28-b30c-7fd9d545d98a!
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-742860 -n old-k8s-version-742860
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-742860 -n old-k8s-version-742860: exit status 2 (339.859387ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context old-k8s-version-742860 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (6.08s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.65s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-153232 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-153232 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (312.850722ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T00:42:52Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-153232 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-153232 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context embed-certs-153232 describe deploy/metrics-server -n kube-system: exit status 1 (70.92209ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-153232 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect embed-certs-153232
helpers_test.go:244: (dbg) docker inspect embed-certs-153232:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "0643874d17495c0c32f8432b12d57b11dd7085dfaf7906608f3a8753637c5a15",
	        "Created": "2025-12-17T00:42:07.386583477Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 282079,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T00:42:07.423409325Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2e44aac5cae5bb6b68b129ed5c85e80a5c1aac07706537d46ba12326f0e5c3cf",
	        "ResolvConfPath": "/var/lib/docker/containers/0643874d17495c0c32f8432b12d57b11dd7085dfaf7906608f3a8753637c5a15/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0643874d17495c0c32f8432b12d57b11dd7085dfaf7906608f3a8753637c5a15/hostname",
	        "HostsPath": "/var/lib/docker/containers/0643874d17495c0c32f8432b12d57b11dd7085dfaf7906608f3a8753637c5a15/hosts",
	        "LogPath": "/var/lib/docker/containers/0643874d17495c0c32f8432b12d57b11dd7085dfaf7906608f3a8753637c5a15/0643874d17495c0c32f8432b12d57b11dd7085dfaf7906608f3a8753637c5a15-json.log",
	        "Name": "/embed-certs-153232",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-153232:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-153232",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0643874d17495c0c32f8432b12d57b11dd7085dfaf7906608f3a8753637c5a15",
	                "LowerDir": "/var/lib/docker/overlay2/75e64cb888fdc80983d39325faeb17b16c0afd2693d7425dc490c93491959bb6-init/diff:/var/lib/docker/overlay2/594b812fd6d8db89dab322ea9e00d43dd555e9709fb5e6953e3873cce717392c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/75e64cb888fdc80983d39325faeb17b16c0afd2693d7425dc490c93491959bb6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/75e64cb888fdc80983d39325faeb17b16c0afd2693d7425dc490c93491959bb6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/75e64cb888fdc80983d39325faeb17b16c0afd2693d7425dc490c93491959bb6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-153232",
	                "Source": "/var/lib/docker/volumes/embed-certs-153232/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-153232",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-153232",
	                "name.minikube.sigs.k8s.io": "embed-certs-153232",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "1f2647b5cfb15a6aade8495af4c1ba5fafd593bc168827b8c7bd520f35b95ffc",
	            "SandboxKey": "/var/run/docker/netns/1f2647b5cfb1",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33073"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33074"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33077"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33075"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33076"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-153232": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a0b8f164bc66d742f404a41cb692119204b3085d963265276bc535b43e9a9723",
	                    "EndpointID": "f60c227284e644af3c1af2f39c108a5c5fe79536ace5dff09a867e83ce6d3ac6",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "ca:5c:bd:e5:75:02",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-153232",
	                        "0643874d1749"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-153232 -n embed-certs-153232
helpers_test.go:253: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-153232 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-153232 logs -n 25: (1.299928916s)
helpers_test.go:261: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬───────
──────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼───────
──────────────┤
	│ start   │ -p old-k8s-version-742860 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0        │ old-k8s-version-742860       │ jenkins │ v1.37.0 │ 17 Dec 25 00:40 UTC │ 17 Dec 25 00:41 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-742860 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ old-k8s-version-742860       │ jenkins │ v1.37.0 │ 17 Dec 25 00:41 UTC │                     │
	│ stop    │ -p old-k8s-version-742860 --alsologtostderr -v=3                                                                                                                                                                                                     │ old-k8s-version-742860       │ jenkins │ v1.37.0 │ 17 Dec 25 00:41 UTC │ 17 Dec 25 00:41 UTC │
	│ delete  │ -p stopped-upgrade-028618                                                                                                                                                                                                                            │ stopped-upgrade-028618       │ jenkins │ v1.37.0 │ 17 Dec 25 00:41 UTC │ 17 Dec 25 00:41 UTC │
	│ start   │ -p no-preload-864613 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-864613            │ jenkins │ v1.37.0 │ 17 Dec 25 00:41 UTC │ 17 Dec 25 00:42 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-742860 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ old-k8s-version-742860       │ jenkins │ v1.37.0 │ 17 Dec 25 00:41 UTC │ 17 Dec 25 00:41 UTC │
	│ start   │ -p old-k8s-version-742860 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0        │ old-k8s-version-742860       │ jenkins │ v1.37.0 │ 17 Dec 25 00:41 UTC │ 17 Dec 25 00:42 UTC │
	│ start   │ -p cert-expiration-753607 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                            │ cert-expiration-753607       │ jenkins │ v1.37.0 │ 17 Dec 25 00:41 UTC │ 17 Dec 25 00:41 UTC │
	│ delete  │ -p cert-expiration-753607                                                                                                                                                                                                                            │ cert-expiration-753607       │ jenkins │ v1.37.0 │ 17 Dec 25 00:41 UTC │ 17 Dec 25 00:42 UTC │
	│ start   │ -p embed-certs-153232 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-153232           │ jenkins │ v1.37.0 │ 17 Dec 25 00:42 UTC │ 17 Dec 25 00:42 UTC │
	│ start   │ -p kubernetes-upgrade-803959 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                                    │ kubernetes-upgrade-803959    │ jenkins │ v1.37.0 │ 17 Dec 25 00:42 UTC │                     │
	│ start   │ -p kubernetes-upgrade-803959 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-803959    │ jenkins │ v1.37.0 │ 17 Dec 25 00:42 UTC │ 17 Dec 25 00:42 UTC │
	│ delete  │ -p kubernetes-upgrade-803959                                                                                                                                                                                                                         │ kubernetes-upgrade-803959    │ jenkins │ v1.37.0 │ 17 Dec 25 00:42 UTC │ 17 Dec 25 00:42 UTC │
	│ delete  │ -p disable-driver-mounts-827138                                                                                                                                                                                                                      │ disable-driver-mounts-827138 │ jenkins │ v1.37.0 │ 17 Dec 25 00:42 UTC │ 17 Dec 25 00:42 UTC │
	│ start   │ -p default-k8s-diff-port-414413 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-414413 │ jenkins │ v1.37.0 │ 17 Dec 25 00:42 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-864613 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ no-preload-864613            │ jenkins │ v1.37.0 │ 17 Dec 25 00:42 UTC │                     │
	│ stop    │ -p no-preload-864613 --alsologtostderr -v=3                                                                                                                                                                                                          │ no-preload-864613            │ jenkins │ v1.37.0 │ 17 Dec 25 00:42 UTC │ 17 Dec 25 00:42 UTC │
	│ image   │ old-k8s-version-742860 image list --format=json                                                                                                                                                                                                      │ old-k8s-version-742860       │ jenkins │ v1.37.0 │ 17 Dec 25 00:42 UTC │ 17 Dec 25 00:42 UTC │
	│ pause   │ -p old-k8s-version-742860 --alsologtostderr -v=1                                                                                                                                                                                                     │ old-k8s-version-742860       │ jenkins │ v1.37.0 │ 17 Dec 25 00:42 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-864613 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ no-preload-864613            │ jenkins │ v1.37.0 │ 17 Dec 25 00:42 UTC │ 17 Dec 25 00:42 UTC │
	│ start   │ -p no-preload-864613 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-864613            │ jenkins │ v1.37.0 │ 17 Dec 25 00:42 UTC │                     │
	│ delete  │ -p old-k8s-version-742860                                                                                                                                                                                                                            │ old-k8s-version-742860       │ jenkins │ v1.37.0 │ 17 Dec 25 00:42 UTC │ 17 Dec 25 00:42 UTC │
	│ delete  │ -p old-k8s-version-742860                                                                                                                                                                                                                            │ old-k8s-version-742860       │ jenkins │ v1.37.0 │ 17 Dec 25 00:42 UTC │ 17 Dec 25 00:42 UTC │
	│ start   │ -p newest-cni-653717 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-653717            │ jenkins │ v1.37.0 │ 17 Dec 25 00:42 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-153232 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ embed-certs-153232           │ jenkins │ v1.37.0 │ 17 Dec 25 00:42 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴───────
──────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 00:42:39
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 00:42:39.551621  292081 out.go:360] Setting OutFile to fd 1 ...
	I1217 00:42:39.551901  292081 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:42:39.551911  292081 out.go:374] Setting ErrFile to fd 2...
	I1217 00:42:39.551915  292081 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:42:39.552166  292081 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-12816/.minikube/bin
	I1217 00:42:39.552662  292081 out.go:368] Setting JSON to false
	I1217 00:42:39.553726  292081 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":5109,"bootTime":1765927050,"procs":301,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 00:42:39.553780  292081 start.go:143] virtualization: kvm guest
	I1217 00:42:39.555553  292081 out.go:179] * [newest-cni-653717] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 00:42:39.556746  292081 notify.go:221] Checking for updates...
	I1217 00:42:39.556769  292081 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 00:42:39.557949  292081 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 00:42:39.559133  292081 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22168-12816/kubeconfig
	I1217 00:42:39.560242  292081 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-12816/.minikube
	I1217 00:42:39.561274  292081 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 00:42:39.563103  292081 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 00:42:39.564577  292081 config.go:182] Loaded profile config "default-k8s-diff-port-414413": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:42:39.564675  292081 config.go:182] Loaded profile config "embed-certs-153232": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:42:39.564782  292081 config.go:182] Loaded profile config "no-preload-864613": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1217 00:42:39.564899  292081 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 00:42:39.590554  292081 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1217 00:42:39.590699  292081 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:42:39.656559  292081 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-17 00:42:39.646099494 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 00:42:39.656663  292081 docker.go:319] overlay module found
	I1217 00:42:39.659088  292081 out.go:179] * Using the docker driver based on user configuration
	I1217 00:42:39.660142  292081 start.go:309] selected driver: docker
	I1217 00:42:39.660155  292081 start.go:927] validating driver "docker" against <nil>
	I1217 00:42:39.660166  292081 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 00:42:39.660774  292081 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:42:39.722518  292081 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-17 00:42:39.711146936 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 00:42:39.722723  292081 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1217 00:42:39.722757  292081 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1217 00:42:39.723072  292081 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1217 00:42:39.726391  292081 out.go:179] * Using Docker driver with root privileges
	I1217 00:42:39.727427  292081 cni.go:84] Creating CNI manager for ""
	I1217 00:42:39.727511  292081 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 00:42:39.727530  292081 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1217 00:42:39.727629  292081 start.go:353] cluster config:
	{Name:newest-cni-653717 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-653717 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:42:39.729007  292081 out.go:179] * Starting "newest-cni-653717" primary control-plane node in "newest-cni-653717" cluster
	I1217 00:42:39.729981  292081 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 00:42:39.731716  292081 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 00:42:39.732745  292081 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1217 00:42:39.732775  292081 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22168-12816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I1217 00:42:39.732795  292081 cache.go:65] Caching tarball of preloaded images
	I1217 00:42:39.732856  292081 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 00:42:39.732901  292081 preload.go:238] Found /home/jenkins/minikube-integration/22168-12816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1217 00:42:39.732916  292081 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1217 00:42:39.733047  292081 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/newest-cni-653717/config.json ...
	I1217 00:42:39.733072  292081 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/newest-cni-653717/config.json: {Name:mkc027815a15326496ab2408383e384558a71cb4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:42:39.754922  292081 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 00:42:39.754940  292081 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 00:42:39.754960  292081 cache.go:243] Successfully downloaded all kic artifacts
	I1217 00:42:39.755026  292081 start.go:360] acquireMachinesLock for newest-cni-653717: {Name:mk721025c3a21068c756325b281b92cea9d9d432 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 00:42:39.755136  292081 start.go:364] duration metric: took 91.503µs to acquireMachinesLock for "newest-cni-653717"
	I1217 00:42:39.755162  292081 start.go:93] Provisioning new machine with config: &{Name:newest-cni-653717 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-653717 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 00:42:39.755339  292081 start.go:125] createHost starting for "" (driver="docker")
	I1217 00:42:34.990123  290128 out.go:252] * Restarting existing docker container for "no-preload-864613" ...
	I1217 00:42:34.990194  290128 cli_runner.go:164] Run: docker start no-preload-864613
	I1217 00:42:35.271109  290128 cli_runner.go:164] Run: docker container inspect no-preload-864613 --format={{.State.Status}}
	I1217 00:42:35.293226  290128 kic.go:430] container "no-preload-864613" state is running.
	I1217 00:42:35.293636  290128 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-864613
	I1217 00:42:35.318368  290128 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/no-preload-864613/config.json ...
	I1217 00:42:35.318647  290128 machine.go:94] provisionDockerMachine start ...
	I1217 00:42:35.318739  290128 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-864613
	I1217 00:42:35.345305  290128 main.go:143] libmachine: Using SSH client type: native
	I1217 00:42:35.345550  290128 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1217 00:42:35.345563  290128 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 00:42:35.346319  290128 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48788->127.0.0.1:33083: read: connection reset by peer
	I1217 00:42:38.479735  290128 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-864613
	
	I1217 00:42:38.479761  290128 ubuntu.go:182] provisioning hostname "no-preload-864613"
	I1217 00:42:38.479822  290128 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-864613
	I1217 00:42:38.498770  290128 main.go:143] libmachine: Using SSH client type: native
	I1217 00:42:38.499115  290128 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1217 00:42:38.499136  290128 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-864613 && echo "no-preload-864613" | sudo tee /etc/hostname
	I1217 00:42:38.637351  290128 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-864613
	
	I1217 00:42:38.637440  290128 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-864613
	I1217 00:42:38.658230  290128 main.go:143] libmachine: Using SSH client type: native
	I1217 00:42:38.658487  290128 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1217 00:42:38.658515  290128 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-864613' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-864613/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-864613' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 00:42:38.788384  290128 main.go:143] libmachine: SSH cmd err, output: <nil>: 
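Hostname provisioning above writes /etc/hostname and patches the 127.0.1.1 entry in /etc/hosts. A minimal manual check on the node (hostname taken from the log lines above) would be, roughly:

    cat /etc/hostname                                # expect: no-preload-864613
    grep -E '^127\.0\.1\.1[[:space:]]' /etc/hosts    # expect: 127.0.1.1 no-preload-864613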
	I1217 00:42:38.788407  290128 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22168-12816/.minikube CaCertPath:/home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22168-12816/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22168-12816/.minikube}
	I1217 00:42:38.788434  290128 ubuntu.go:190] setting up certificates
	I1217 00:42:38.788447  290128 provision.go:84] configureAuth start
	I1217 00:42:38.788515  290128 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-864613
	I1217 00:42:38.807940  290128 provision.go:143] copyHostCerts
	I1217 00:42:38.808027  290128 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-12816/.minikube/ca.pem, removing ...
	I1217 00:42:38.808047  290128 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-12816/.minikube/ca.pem
	I1217 00:42:38.808122  290128 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22168-12816/.minikube/ca.pem (1078 bytes)
	I1217 00:42:38.808261  290128 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-12816/.minikube/cert.pem, removing ...
	I1217 00:42:38.808286  290128 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-12816/.minikube/cert.pem
	I1217 00:42:38.808332  290128 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22168-12816/.minikube/cert.pem (1123 bytes)
	I1217 00:42:38.808431  290128 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-12816/.minikube/key.pem, removing ...
	I1217 00:42:38.808442  290128 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-12816/.minikube/key.pem
	I1217 00:42:38.808491  290128 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22168-12816/.minikube/key.pem (1679 bytes)
	I1217 00:42:38.808580  290128 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22168-12816/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca-key.pem org=jenkins.no-preload-864613 san=[127.0.0.1 192.168.103.2 localhost minikube no-preload-864613]
	I1217 00:42:38.892180  290128 provision.go:177] copyRemoteCerts
	I1217 00:42:38.892238  290128 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 00:42:38.892281  290128 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-864613
	I1217 00:42:38.911079  290128 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/no-preload-864613/id_rsa Username:docker}
	I1217 00:42:39.005310  290128 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1217 00:42:39.023102  290128 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1217 00:42:39.040760  290128 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1217 00:42:39.058614  290128 provision.go:87] duration metric: took 270.146931ms to configureAuth
	I1217 00:42:39.058640  290128 ubuntu.go:206] setting minikube options for container-runtime
	I1217 00:42:39.058823  290128 config.go:182] Loaded profile config "no-preload-864613": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1217 00:42:39.058943  290128 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-864613
	I1217 00:42:39.077523  290128 main.go:143] libmachine: Using SSH client type: native
	I1217 00:42:39.077804  290128 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1217 00:42:39.077831  290128 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 00:42:39.439563  290128 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 00:42:39.439588  290128 machine.go:97] duration metric: took 4.120922822s to provisionDockerMachine
	I1217 00:42:39.439652  290128 start.go:293] postStartSetup for "no-preload-864613" (driver="docker")
	I1217 00:42:39.439674  290128 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 00:42:39.439737  290128 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 00:42:39.439779  290128 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-864613
	I1217 00:42:39.458833  290128 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/no-preload-864613/id_rsa Username:docker}
	I1217 00:42:39.556864  290128 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 00:42:39.560960  290128 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 00:42:39.560985  290128 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 00:42:39.561021  290128 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-12816/.minikube/addons for local assets ...
	I1217 00:42:39.561074  290128 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-12816/.minikube/files for local assets ...
	I1217 00:42:39.561192  290128 filesync.go:149] local asset: /home/jenkins/minikube-integration/22168-12816/.minikube/files/etc/ssl/certs/163542.pem -> 163542.pem in /etc/ssl/certs
	I1217 00:42:39.561332  290128 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 00:42:39.569440  290128 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/files/etc/ssl/certs/163542.pem --> /etc/ssl/certs/163542.pem (1708 bytes)
	I1217 00:42:39.589195  290128 start.go:296] duration metric: took 149.524862ms for postStartSetup
	I1217 00:42:39.589264  290128 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 00:42:39.589306  290128 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-864613
	I1217 00:42:39.611378  290128 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/no-preload-864613/id_rsa Username:docker}
	I1217 00:42:39.706544  290128 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 00:42:39.711286  290128 fix.go:56] duration metric: took 4.742462188s for fixHost
	I1217 00:42:39.711308  290128 start.go:83] releasing machines lock for "no-preload-864613", held for 4.742503801s
	I1217 00:42:39.711366  290128 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-864613
	I1217 00:42:39.731529  290128 ssh_runner.go:195] Run: cat /version.json
	I1217 00:42:39.731581  290128 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-864613
	I1217 00:42:39.731644  290128 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 00:42:39.731702  290128 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-864613
	I1217 00:42:39.751519  290128 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/no-preload-864613/id_rsa Username:docker}
	I1217 00:42:39.752129  290128 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/no-preload-864613/id_rsa Username:docker}
	I1217 00:42:38.606421  284412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:42:39.107158  284412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:42:39.606578  284412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:42:40.107068  284412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:42:40.607475  284412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:42:41.106671  284412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:42:41.191185  284412 kubeadm.go:1114] duration metric: took 4.657497583s to wait for elevateKubeSystemPrivileges
	I1217 00:42:41.191228  284412 kubeadm.go:403] duration metric: took 15.676954898s to StartCluster
	I1217 00:42:41.191250  284412 settings.go:142] acquiring lock: {Name:mk7d7632cd00ceda791845d793d841181ea8188a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:42:41.191326  284412 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22168-12816/kubeconfig
	I1217 00:42:41.193393  284412 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/kubeconfig: {Name:mkd977475b39383babd1c1bcd25b2b3c1ea2d727 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:42:41.193647  284412 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 00:42:41.193813  284412 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1217 00:42:41.193845  284412 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 00:42:41.193954  284412 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-414413"
	I1217 00:42:41.193968  284412 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-414413"
	I1217 00:42:41.193986  284412 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-414413"
	I1217 00:42:41.194024  284412 config.go:182] Loaded profile config "default-k8s-diff-port-414413": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:42:41.194065  284412 host.go:66] Checking if "default-k8s-diff-port-414413" exists ...
	I1217 00:42:41.193999  284412 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-414413"
	I1217 00:42:41.194464  284412 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-414413 --format={{.State.Status}}
	I1217 00:42:41.194643  284412 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-414413 --format={{.State.Status}}
	I1217 00:42:41.199234  284412 out.go:179] * Verifying Kubernetes components...
	I1217 00:42:41.200842  284412 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:42:41.223659  284412 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 00:42:39.842716  290128 ssh_runner.go:195] Run: systemctl --version
	I1217 00:42:39.901651  290128 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 00:42:39.939255  290128 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 00:42:39.944540  290128 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 00:42:39.944621  290128 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 00:42:39.953110  290128 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1217 00:42:39.953139  290128 start.go:496] detecting cgroup driver to use...
	I1217 00:42:39.953172  290128 detect.go:190] detected "systemd" cgroup driver on host os
	I1217 00:42:39.953213  290128 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 00:42:39.969396  290128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 00:42:39.983260  290128 docker.go:218] disabling cri-docker service (if available) ...
	I1217 00:42:39.983311  290128 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 00:42:40.004183  290128 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 00:42:40.024650  290128 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 00:42:40.129134  290128 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 00:42:40.230889  290128 docker.go:234] disabling docker service ...
	I1217 00:42:40.230963  290128 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 00:42:40.249697  290128 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 00:42:40.263284  290128 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 00:42:40.366858  290128 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 00:42:40.456091  290128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 00:42:40.475522  290128 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 00:42:40.491549  290128 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 00:42:40.491607  290128 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:42:40.501573  290128 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1217 00:42:40.501637  290128 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:42:40.512350  290128 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:42:40.521014  290128 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:42:40.529858  290128 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 00:42:40.539315  290128 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:42:40.548317  290128 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:42:40.556545  290128 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:42:40.565397  290128 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 00:42:40.572624  290128 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 00:42:40.580318  290128 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:42:40.683030  290128 ssh_runner.go:195] Run: sudo systemctl restart crio
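A minimal spot-check of what the sed edits above should leave in /etc/crio/crio.conf.d/02-crio.conf before the restart (file path and key names taken from the commands themselves; the rest of the drop-in may differ) would be, roughly:

    grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    # expected, approximately:
    #   pause_image = "registry.k8s.io/pause:3.10.1"
    #   cgroup_manager = "systemd"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",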
	I1217 00:42:41.204474  290128 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 00:42:41.204534  290128 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 00:42:41.212044  290128 start.go:564] Will wait 60s for crictl version
	I1217 00:42:41.212169  290128 ssh_runner.go:195] Run: which crictl
	I1217 00:42:41.218032  290128 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 00:42:41.259772  290128 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1217 00:42:41.259870  290128 ssh_runner.go:195] Run: crio --version
	I1217 00:42:41.301952  290128 ssh_runner.go:195] Run: crio --version
	I1217 00:42:41.350647  290128 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1217 00:42:41.224598  284412 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-414413"
	I1217 00:42:41.224648  284412 host.go:66] Checking if "default-k8s-diff-port-414413" exists ...
	I1217 00:42:41.224951  284412 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:42:41.224967  284412 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 00:42:41.225071  284412 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-414413
	I1217 00:42:41.225154  284412 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-414413 --format={{.State.Status}}
	I1217 00:42:41.259312  284412 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 00:42:41.259335  284412 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 00:42:41.259392  284412 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-414413
	I1217 00:42:41.263451  284412 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/default-k8s-diff-port-414413/id_rsa Username:docker}
	I1217 00:42:41.284213  284412 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/default-k8s-diff-port-414413/id_rsa Username:docker}
	I1217 00:42:41.318678  284412 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1217 00:42:41.383381  284412 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 00:42:41.407448  284412 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:42:41.422819  284412 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:42:41.574977  284412 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1217 00:42:41.576809  284412 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-414413" to be "Ready" ...
	I1217 00:42:41.847773  284412 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
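The replace command a few lines above injects a hosts stanza into the coredns ConfigMap so that host.minikube.internal resolves to the host gateway (192.168.76.1 here). A hedged way to confirm the injected block by hand, assuming the kubectl context is named after the profile:

    kubectl --context default-k8s-diff-port-414413 -n kube-system \
      get configmap coredns -o jsonpath='{.data.Corefile}'
    # the Corefile should now contain, approximately:
    #        hosts {
    #           192.168.76.1 host.minikube.internal
    #           fallthrough
    #        }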
	I1217 00:42:41.351979  290128 cli_runner.go:164] Run: docker network inspect no-preload-864613 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 00:42:41.382246  290128 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1217 00:42:41.388749  290128 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 00:42:41.407129  290128 kubeadm.go:884] updating cluster {Name:no-preload-864613 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-864613 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 00:42:41.407300  290128 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1217 00:42:41.407365  290128 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 00:42:41.460347  290128 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 00:42:41.461086  290128 cache_images.go:86] Images are preloaded, skipping loading
	I1217 00:42:41.461107  290128 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.35.0-beta.0 crio true true} ...
	I1217 00:42:41.461250  290128 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-864613 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-864613 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 00:42:41.461345  290128 ssh_runner.go:195] Run: crio config
	I1217 00:42:41.540745  290128 cni.go:84] Creating CNI manager for ""
	I1217 00:42:41.540776  290128 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 00:42:41.540795  290128 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 00:42:41.540825  290128 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-864613 NodeName:no-preload-864613 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 00:42:41.541050  290128 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-864613"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 00:42:41.541130  290128 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1217 00:42:41.552225  290128 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 00:42:41.552302  290128 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 00:42:41.563587  290128 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I1217 00:42:41.582296  290128 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1217 00:42:41.601245  290128 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2223 bytes)
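The kubeadm/kubelet/kube-proxy manifest shown earlier is written here to /var/tmp/minikube/kubeadm.yaml.new. As a sketch only: recent kubeadm releases expose an offline sanity check for such a file, so something like the following could be run on the node (whether this particular binary supports the subcommand is an assumption):

    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new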
	I1217 00:42:41.618842  290128 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1217 00:42:41.627097  290128 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 00:42:41.640575  290128 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:42:41.779494  290128 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 00:42:41.808680  290128 certs.go:69] Setting up /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/no-preload-864613 for IP: 192.168.103.2
	I1217 00:42:41.808703  290128 certs.go:195] generating shared ca certs ...
	I1217 00:42:41.808722  290128 certs.go:227] acquiring lock for ca certs: {Name:mk3fafd0dd66863a6056cb02497503a5e6afecf4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:42:41.808901  290128 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22168-12816/.minikube/ca.key
	I1217 00:42:41.808964  290128 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22168-12816/.minikube/proxy-client-ca.key
	I1217 00:42:41.808977  290128 certs.go:257] generating profile certs ...
	I1217 00:42:41.809120  290128 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/no-preload-864613/client.key
	I1217 00:42:41.809192  290128 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/no-preload-864613/apiserver.key.74439f26
	I1217 00:42:41.809257  290128 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/no-preload-864613/proxy-client.key
	I1217 00:42:41.809398  290128 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/16354.pem (1338 bytes)
	W1217 00:42:41.809440  290128 certs.go:480] ignoring /home/jenkins/minikube-integration/22168-12816/.minikube/certs/16354_empty.pem, impossibly tiny 0 bytes
	I1217 00:42:41.809456  290128 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca-key.pem (1679 bytes)
	I1217 00:42:41.809498  290128 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem (1078 bytes)
	I1217 00:42:41.809536  290128 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/cert.pem (1123 bytes)
	I1217 00:42:41.809574  290128 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/key.pem (1679 bytes)
	I1217 00:42:41.809636  290128 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/files/etc/ssl/certs/163542.pem (1708 bytes)
	I1217 00:42:41.810241  290128 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 00:42:41.835907  290128 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1217 00:42:41.859930  290128 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 00:42:41.882138  290128 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1671 bytes)
	I1217 00:42:41.912524  290128 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/no-preload-864613/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1217 00:42:41.936204  290128 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/no-preload-864613/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1217 00:42:41.956213  290128 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/no-preload-864613/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 00:42:41.975723  290128 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/no-preload-864613/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 00:42:41.996532  290128 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 00:42:42.014588  290128 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/certs/16354.pem --> /usr/share/ca-certificates/16354.pem (1338 bytes)
	I1217 00:42:42.033145  290128 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/files/etc/ssl/certs/163542.pem --> /usr/share/ca-certificates/163542.pem (1708 bytes)
	I1217 00:42:42.050882  290128 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 00:42:42.063166  290128 ssh_runner.go:195] Run: openssl version
	I1217 00:42:42.069209  290128 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:42:42.078973  290128 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 00:42:42.087312  290128 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:42:42.091173  290128 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 00:05 /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:42:42.091229  290128 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:42:42.127714  290128 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 00:42:42.135865  290128 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/16354.pem
	I1217 00:42:42.144324  290128 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/16354.pem /etc/ssl/certs/16354.pem
	I1217 00:42:42.151722  290128 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16354.pem
	I1217 00:42:42.155308  290128 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 00:13 /usr/share/ca-certificates/16354.pem
	I1217 00:42:42.155360  290128 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16354.pem
	I1217 00:42:42.194654  290128 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 00:42:42.204167  290128 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/163542.pem
	I1217 00:42:42.212026  290128 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/163542.pem /etc/ssl/certs/163542.pem
	I1217 00:42:42.219563  290128 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/163542.pem
	I1217 00:42:42.223628  290128 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 00:13 /usr/share/ca-certificates/163542.pem
	I1217 00:42:42.223682  290128 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/163542.pem
	I1217 00:42:42.276803  290128 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
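Each CA above is installed under /usr/share/ca-certificates and then checked through OpenSSL's hash-named symlink convention: openssl x509 -hash prints the subject hash that names the /etc/ssl/certs/<hash>.0 link. Illustrated with the first certificate (the hash b5213941 comes from the test -L line above; that the link resolves to minikubeCA.pem is an assumption, since the log only checks that it exists):

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
    readlink -f /etc/ssl/certs/b5213941.0                                     # expected to resolve to minikubeCA.pem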
	I1217 00:42:42.285639  290128 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 00:42:42.290281  290128 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 00:42:42.328285  290128 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 00:42:42.379314  290128 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 00:42:42.430181  290128 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 00:42:42.491936  290128 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 00:42:42.551969  290128 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1217 00:42:42.595853  290128 kubeadm.go:401] StartCluster: {Name:no-preload-864613 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-864613 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:42:42.595977  290128 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 00:42:42.596064  290128 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 00:42:42.632158  290128 cri.go:89] found id: "4b34ed74185a723d1987fd893c6b89aa61e85dd77a4391ea83bf44f5d07a0931"
	I1217 00:42:42.632183  290128 cri.go:89] found id: "a590d671bfa52ffb77f09298e606dd5a6cef506d25bf7c749bd516cf65fabaab"
	I1217 00:42:42.632191  290128 cri.go:89] found id: "a12cf220a059b218df62a14f9045f72149c1009f3507c8c36e206fdf43dc9d57"
	I1217 00:42:42.632202  290128 cri.go:89] found id: "d592a6ba05b7b5e2d53ffd9b29510a47348394c0b8faf29e99d49dce869dbeff"
	I1217 00:42:42.632208  290128 cri.go:89] found id: ""
	I1217 00:42:42.632258  290128 ssh_runner.go:195] Run: sudo runc list -f json
	W1217 00:42:42.648079  290128 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T00:42:42Z" level=error msg="open /run/runc: no such file or directory"
	I1217 00:42:42.648152  290128 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 00:42:42.659743  290128 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1217 00:42:42.659831  290128 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1217 00:42:42.659957  290128 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1217 00:42:42.670583  290128 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1217 00:42:42.671843  290128 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-864613" does not appear in /home/jenkins/minikube-integration/22168-12816/kubeconfig
	I1217 00:42:42.672610  290128 kubeconfig.go:62] /home/jenkins/minikube-integration/22168-12816/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-864613" cluster setting kubeconfig missing "no-preload-864613" context setting]
	I1217 00:42:42.673849  290128 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/kubeconfig: {Name:mkd977475b39383babd1c1bcd25b2b3c1ea2d727 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:42:42.676258  290128 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1217 00:42:42.685491  290128 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.103.2
	I1217 00:42:42.685528  290128 kubeadm.go:602] duration metric: took 25.615797ms to restartPrimaryControlPlane
	I1217 00:42:42.685540  290128 kubeadm.go:403] duration metric: took 89.695231ms to StartCluster
	I1217 00:42:42.685558  290128 settings.go:142] acquiring lock: {Name:mk7d7632cd00ceda791845d793d841181ea8188a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:42:42.685612  290128 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22168-12816/kubeconfig
	I1217 00:42:42.687715  290128 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/kubeconfig: {Name:mkd977475b39383babd1c1bcd25b2b3c1ea2d727 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:42:42.687977  290128 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 00:42:42.688235  290128 config.go:182] Loaded profile config "no-preload-864613": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1217 00:42:42.688305  290128 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 00:42:42.688442  290128 addons.go:70] Setting storage-provisioner=true in profile "no-preload-864613"
	I1217 00:42:42.688465  290128 addons.go:239] Setting addon storage-provisioner=true in "no-preload-864613"
	I1217 00:42:42.688466  290128 addons.go:70] Setting dashboard=true in profile "no-preload-864613"
	W1217 00:42:42.688473  290128 addons.go:248] addon storage-provisioner should already be in state true
	I1217 00:42:42.688487  290128 addons.go:70] Setting default-storageclass=true in profile "no-preload-864613"
	I1217 00:42:42.688491  290128 addons.go:239] Setting addon dashboard=true in "no-preload-864613"
	W1217 00:42:42.688504  290128 addons.go:248] addon dashboard should already be in state true
	I1217 00:42:42.688508  290128 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-864613"
	I1217 00:42:42.688534  290128 host.go:66] Checking if "no-preload-864613" exists ...
	I1217 00:42:42.688565  290128 host.go:66] Checking if "no-preload-864613" exists ...
	I1217 00:42:42.688902  290128 cli_runner.go:164] Run: docker container inspect no-preload-864613 --format={{.State.Status}}
	I1217 00:42:42.689014  290128 cli_runner.go:164] Run: docker container inspect no-preload-864613 --format={{.State.Status}}
	I1217 00:42:42.689031  290128 cli_runner.go:164] Run: docker container inspect no-preload-864613 --format={{.State.Status}}
	I1217 00:42:42.696229  290128 out.go:179] * Verifying Kubernetes components...
	I1217 00:42:42.698403  290128 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:42:42.723527  290128 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1217 00:42:42.724785  290128 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1217 00:42:42.725948  290128 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1217 00:42:42.726058  290128 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1217 00:42:42.726130  290128 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-864613
	I1217 00:42:42.727005  290128 addons.go:239] Setting addon default-storageclass=true in "no-preload-864613"
	W1217 00:42:42.727021  290128 addons.go:248] addon default-storageclass should already be in state true
	I1217 00:42:42.727055  290128 host.go:66] Checking if "no-preload-864613" exists ...
	I1217 00:42:42.727489  290128 cli_runner.go:164] Run: docker container inspect no-preload-864613 --format={{.State.Status}}
	I1217 00:42:42.730761  290128 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W1217 00:42:39.742533  280822 node_ready.go:57] node "embed-certs-153232" has "Ready":"False" status (will retry)
	I1217 00:42:41.749292  280822 node_ready.go:49] node "embed-certs-153232" is "Ready"
	I1217 00:42:41.749331  280822 node_ready.go:38] duration metric: took 11.010585734s for node "embed-certs-153232" to be "Ready" ...
	I1217 00:42:41.749349  280822 api_server.go:52] waiting for apiserver process to appear ...
	I1217 00:42:41.749405  280822 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:41.774188  280822 api_server.go:72] duration metric: took 11.489358576s to wait for apiserver process to appear ...
	I1217 00:42:41.774225  280822 api_server.go:88] waiting for apiserver healthz status ...
	I1217 00:42:41.774250  280822 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1217 00:42:41.783349  280822 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1217 00:42:41.784553  280822 api_server.go:141] control plane version: v1.34.2
	I1217 00:42:41.784584  280822 api_server.go:131] duration metric: took 10.351149ms to wait for apiserver health ...
	I1217 00:42:41.784596  280822 system_pods.go:43] waiting for kube-system pods to appear ...
	I1217 00:42:41.788662  280822 system_pods.go:59] 8 kube-system pods found
	I1217 00:42:41.788701  280822 system_pods.go:61] "coredns-66bc5c9577-vtspd" [aedf434b-e03e-479c-a8f2-199e28231d61] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 00:42:41.788711  280822 system_pods.go:61] "etcd-embed-certs-153232" [68a7a631-c79e-48d1-bd8d-1aafc2b61fcc] Running
	I1217 00:42:41.788718  280822 system_pods.go:61] "kindnet-zffzt" [f06f5d73-eef9-4876-b0aa-862d58c18777] Running
	I1217 00:42:41.788724  280822 system_pods.go:61] "kube-apiserver-embed-certs-153232" [a0a484be-31c5-4471-b35c-7d059d9e1b00] Running
	I1217 00:42:41.788736  280822 system_pods.go:61] "kube-controller-manager-embed-certs-153232" [6fd01afb-bd8e-450b-9082-310ff94c5958] Running
	I1217 00:42:41.788741  280822 system_pods.go:61] "kube-proxy-82b8k" [68026912-6bcc-4aee-b806-51f967dc200f] Running
	I1217 00:42:41.788746  280822 system_pods.go:61] "kube-scheduler-embed-certs-153232" [af854f70-8bef-44c5-ad64-197a3282d5c3] Running
	I1217 00:42:41.788794  280822 system_pods.go:61] "storage-provisioner" [ad4a1982-2da6-490d-bcba-f04782d2d9b8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 00:42:41.788808  280822 system_pods.go:74] duration metric: took 4.204985ms to wait for pod list to return data ...
	I1217 00:42:41.788822  280822 default_sa.go:34] waiting for default service account to be created ...
	I1217 00:42:41.793561  280822 default_sa.go:45] found service account: "default"
	I1217 00:42:41.793587  280822 default_sa.go:55] duration metric: took 4.758694ms for default service account to be created ...
	I1217 00:42:41.793600  280822 system_pods.go:116] waiting for k8s-apps to be running ...
	I1217 00:42:41.889984  280822 system_pods.go:86] 8 kube-system pods found
	I1217 00:42:41.890037  280822 system_pods.go:89] "coredns-66bc5c9577-vtspd" [aedf434b-e03e-479c-a8f2-199e28231d61] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 00:42:41.890046  280822 system_pods.go:89] "etcd-embed-certs-153232" [68a7a631-c79e-48d1-bd8d-1aafc2b61fcc] Running
	I1217 00:42:41.890056  280822 system_pods.go:89] "kindnet-zffzt" [f06f5d73-eef9-4876-b0aa-862d58c18777] Running
	I1217 00:42:41.890063  280822 system_pods.go:89] "kube-apiserver-embed-certs-153232" [a0a484be-31c5-4471-b35c-7d059d9e1b00] Running
	I1217 00:42:41.890073  280822 system_pods.go:89] "kube-controller-manager-embed-certs-153232" [6fd01afb-bd8e-450b-9082-310ff94c5958] Running
	I1217 00:42:41.890078  280822 system_pods.go:89] "kube-proxy-82b8k" [68026912-6bcc-4aee-b806-51f967dc200f] Running
	I1217 00:42:41.890085  280822 system_pods.go:89] "kube-scheduler-embed-certs-153232" [af854f70-8bef-44c5-ad64-197a3282d5c3] Running
	I1217 00:42:41.890095  280822 system_pods.go:89] "storage-provisioner" [ad4a1982-2da6-490d-bcba-f04782d2d9b8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 00:42:41.890123  280822 retry.go:31] will retry after 248.746676ms: missing components: kube-dns
	I1217 00:42:42.142494  280822 system_pods.go:86] 8 kube-system pods found
	I1217 00:42:42.142524  280822 system_pods.go:89] "coredns-66bc5c9577-vtspd" [aedf434b-e03e-479c-a8f2-199e28231d61] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 00:42:42.142538  280822 system_pods.go:89] "etcd-embed-certs-153232" [68a7a631-c79e-48d1-bd8d-1aafc2b61fcc] Running
	I1217 00:42:42.142546  280822 system_pods.go:89] "kindnet-zffzt" [f06f5d73-eef9-4876-b0aa-862d58c18777] Running
	I1217 00:42:42.142550  280822 system_pods.go:89] "kube-apiserver-embed-certs-153232" [a0a484be-31c5-4471-b35c-7d059d9e1b00] Running
	I1217 00:42:42.142554  280822 system_pods.go:89] "kube-controller-manager-embed-certs-153232" [6fd01afb-bd8e-450b-9082-310ff94c5958] Running
	I1217 00:42:42.142557  280822 system_pods.go:89] "kube-proxy-82b8k" [68026912-6bcc-4aee-b806-51f967dc200f] Running
	I1217 00:42:42.142560  280822 system_pods.go:89] "kube-scheduler-embed-certs-153232" [af854f70-8bef-44c5-ad64-197a3282d5c3] Running
	I1217 00:42:42.142565  280822 system_pods.go:89] "storage-provisioner" [ad4a1982-2da6-490d-bcba-f04782d2d9b8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 00:42:42.142577  280822 retry.go:31] will retry after 366.812444ms: missing components: kube-dns
	I1217 00:42:42.514253  280822 system_pods.go:86] 8 kube-system pods found
	I1217 00:42:42.514281  280822 system_pods.go:89] "coredns-66bc5c9577-vtspd" [aedf434b-e03e-479c-a8f2-199e28231d61] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 00:42:42.514287  280822 system_pods.go:89] "etcd-embed-certs-153232" [68a7a631-c79e-48d1-bd8d-1aafc2b61fcc] Running
	I1217 00:42:42.514293  280822 system_pods.go:89] "kindnet-zffzt" [f06f5d73-eef9-4876-b0aa-862d58c18777] Running
	I1217 00:42:42.514296  280822 system_pods.go:89] "kube-apiserver-embed-certs-153232" [a0a484be-31c5-4471-b35c-7d059d9e1b00] Running
	I1217 00:42:42.514300  280822 system_pods.go:89] "kube-controller-manager-embed-certs-153232" [6fd01afb-bd8e-450b-9082-310ff94c5958] Running
	I1217 00:42:42.514304  280822 system_pods.go:89] "kube-proxy-82b8k" [68026912-6bcc-4aee-b806-51f967dc200f] Running
	I1217 00:42:42.514307  280822 system_pods.go:89] "kube-scheduler-embed-certs-153232" [af854f70-8bef-44c5-ad64-197a3282d5c3] Running
	I1217 00:42:42.514312  280822 system_pods.go:89] "storage-provisioner" [ad4a1982-2da6-490d-bcba-f04782d2d9b8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 00:42:42.514399  280822 retry.go:31] will retry after 333.656577ms: missing components: kube-dns
	I1217 00:42:42.853133  280822 system_pods.go:86] 8 kube-system pods found
	I1217 00:42:42.853164  280822 system_pods.go:89] "coredns-66bc5c9577-vtspd" [aedf434b-e03e-479c-a8f2-199e28231d61] Running
	I1217 00:42:42.853172  280822 system_pods.go:89] "etcd-embed-certs-153232" [68a7a631-c79e-48d1-bd8d-1aafc2b61fcc] Running
	I1217 00:42:42.853177  280822 system_pods.go:89] "kindnet-zffzt" [f06f5d73-eef9-4876-b0aa-862d58c18777] Running
	I1217 00:42:42.853183  280822 system_pods.go:89] "kube-apiserver-embed-certs-153232" [a0a484be-31c5-4471-b35c-7d059d9e1b00] Running
	I1217 00:42:42.853190  280822 system_pods.go:89] "kube-controller-manager-embed-certs-153232" [6fd01afb-bd8e-450b-9082-310ff94c5958] Running
	I1217 00:42:42.853195  280822 system_pods.go:89] "kube-proxy-82b8k" [68026912-6bcc-4aee-b806-51f967dc200f] Running
	I1217 00:42:42.853200  280822 system_pods.go:89] "kube-scheduler-embed-certs-153232" [af854f70-8bef-44c5-ad64-197a3282d5c3] Running
	I1217 00:42:42.853205  280822 system_pods.go:89] "storage-provisioner" [ad4a1982-2da6-490d-bcba-f04782d2d9b8] Running
	I1217 00:42:42.853214  280822 system_pods.go:126] duration metric: took 1.059606129s to wait for k8s-apps to be running ...
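
The "will retry after …ms: missing components: kube-dns" lines above are minikube's retry.go pattern: list the kube-system pods, and if a required component is still not Running, sleep a short randomized interval and look again. A minimal stdlib Go sketch of that loop follows; missingComponents is a hypothetical stand-in for the real pod check, and the backoff values are illustrative.

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // missingComponents is a hypothetical stand-in: the real code lists
    // kube-system pods and returns the required components not yet Running.
    func missingComponents() []string { return nil }

    // waitForKubeApps polls until nothing is missing, sleeping a short
    // randomized interval between attempts, like the log lines above.
    func waitForKubeApps(timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            missing := missingComponents()
            if len(missing) == 0 {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("timed out, still missing: %v", missing)
            }
            backoff := 200*time.Millisecond + time.Duration(rand.Intn(300))*time.Millisecond
            fmt.Printf("will retry after %v: missing components: %v\n", backoff, missing)
            time.Sleep(backoff)
        }
    }

    func main() { _ = waitForKubeApps(time.Minute) }
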
	I1217 00:42:42.853227  280822 system_svc.go:44] waiting for kubelet service to be running ....
	I1217 00:42:42.853279  280822 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 00:42:42.869286  280822 system_svc.go:56] duration metric: took 16.049777ms WaitForService to wait for kubelet
	I1217 00:42:42.869316  280822 kubeadm.go:587] duration metric: took 12.584493992s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 00:42:42.869340  280822 node_conditions.go:102] verifying NodePressure condition ...
	I1217 00:42:42.872567  280822 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1217 00:42:42.872595  280822 node_conditions.go:123] node cpu capacity is 8
	I1217 00:42:42.872609  280822 node_conditions.go:105] duration metric: took 3.264541ms to run NodePressure ...
	I1217 00:42:42.872621  280822 start.go:242] waiting for startup goroutines ...
	I1217 00:42:42.872628  280822 start.go:247] waiting for cluster config update ...
	I1217 00:42:42.872641  280822 start.go:256] writing updated cluster config ...
	I1217 00:42:42.872974  280822 ssh_runner.go:195] Run: rm -f paused
	I1217 00:42:42.877546  280822 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 00:42:42.881940  280822 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-vtspd" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:42:42.886950  280822 pod_ready.go:94] pod "coredns-66bc5c9577-vtspd" is "Ready"
	I1217 00:42:42.886970  280822 pod_ready.go:86] duration metric: took 4.999829ms for pod "coredns-66bc5c9577-vtspd" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:42:42.889527  280822 pod_ready.go:83] waiting for pod "etcd-embed-certs-153232" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:42:42.895847  280822 pod_ready.go:94] pod "etcd-embed-certs-153232" is "Ready"
	I1217 00:42:42.895869  280822 pod_ready.go:86] duration metric: took 6.325871ms for pod "etcd-embed-certs-153232" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:42:42.898281  280822 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-153232" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:42:42.902688  280822 pod_ready.go:94] pod "kube-apiserver-embed-certs-153232" is "Ready"
	I1217 00:42:42.902710  280822 pod_ready.go:86] duration metric: took 4.408331ms for pod "kube-apiserver-embed-certs-153232" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:42:42.905039  280822 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-153232" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:42:41.849058  284412 addons.go:530] duration metric: took 655.212128ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1217 00:42:42.080776  284412 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-414413" context rescaled to 1 replicas
	I1217 00:42:43.281597  280822 pod_ready.go:94] pod "kube-controller-manager-embed-certs-153232" is "Ready"
	I1217 00:42:43.281626  280822 pod_ready.go:86] duration metric: took 376.5674ms for pod "kube-controller-manager-embed-certs-153232" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:42:43.484610  280822 pod_ready.go:83] waiting for pod "kube-proxy-82b8k" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:42:43.960602  280822 pod_ready.go:94] pod "kube-proxy-82b8k" is "Ready"
	I1217 00:42:43.960650  280822 pod_ready.go:86] duration metric: took 476.012578ms for pod "kube-proxy-82b8k" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:42:44.099686  280822 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-153232" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:42:44.482807  280822 pod_ready.go:94] pod "kube-scheduler-embed-certs-153232" is "Ready"
	I1217 00:42:44.482862  280822 pod_ready.go:86] duration metric: took 383.141625ms for pod "kube-scheduler-embed-certs-153232" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:42:44.482879  280822 pod_ready.go:40] duration metric: took 1.605302389s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 00:42:44.546591  280822 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1217 00:42:44.548075  280822 out.go:179] * Done! kubectl is now configured to use "embed-certs-153232" cluster and "default" namespace by default
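
Each pod_ready line above reduces to reading the PodReady condition on the pod object. A hedged client-go sketch of that check; the kubeconfig path is a placeholder rather than the path used in this run.

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the pod's PodReady condition is True, which is
    // what the pod_ready.go waits above are effectively checking.
    func isPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(), "coredns-66bc5c9577-vtspd", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Println("ready:", isPodReady(pod))
    }
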
	I1217 00:42:39.757771  292081 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1217 00:42:39.758042  292081 start.go:159] libmachine.API.Create for "newest-cni-653717" (driver="docker")
	I1217 00:42:39.758083  292081 client.go:173] LocalClient.Create starting
	I1217 00:42:39.758162  292081 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem
	I1217 00:42:39.758203  292081 main.go:143] libmachine: Decoding PEM data...
	I1217 00:42:39.758225  292081 main.go:143] libmachine: Parsing certificate...
	I1217 00:42:39.758288  292081 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22168-12816/.minikube/certs/cert.pem
	I1217 00:42:39.758312  292081 main.go:143] libmachine: Decoding PEM data...
	I1217 00:42:39.758329  292081 main.go:143] libmachine: Parsing certificate...
	I1217 00:42:39.758773  292081 cli_runner.go:164] Run: docker network inspect newest-cni-653717 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1217 00:42:39.776750  292081 cli_runner.go:211] docker network inspect newest-cni-653717 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1217 00:42:39.776824  292081 network_create.go:284] running [docker network inspect newest-cni-653717] to gather additional debugging logs...
	I1217 00:42:39.776846  292081 cli_runner.go:164] Run: docker network inspect newest-cni-653717
	W1217 00:42:39.795539  292081 cli_runner.go:211] docker network inspect newest-cni-653717 returned with exit code 1
	I1217 00:42:39.795568  292081 network_create.go:287] error running [docker network inspect newest-cni-653717]: docker network inspect newest-cni-653717: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-653717 not found
	I1217 00:42:39.795583  292081 network_create.go:289] output of [docker network inspect newest-cni-653717]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-653717 not found
	
	** /stderr **
	I1217 00:42:39.795681  292081 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 00:42:39.813581  292081 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-6ffd1d738f01 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:6e:3d:52:75:47:82} reservation:<nil>}
	I1217 00:42:39.814315  292081 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-280edd437675 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:5e:ae:02:b5:f9:a6} reservation:<nil>}
	I1217 00:42:39.815124  292081 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-9f28d049043c IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:fa:3f:8e:e9:44:56} reservation:<nil>}
	I1217 00:42:39.815715  292081 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-a57026acfc12 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:aa:e6:32:39:49:3b} reservation:<nil>}
	I1217 00:42:39.816283  292081 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-a0b8f164bc66 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:ae:bf:0f:c2:a1:7a} reservation:<nil>}
	I1217 00:42:39.817094  292081 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ed8b70}
	I1217 00:42:39.817124  292081 network_create.go:124] attempt to create docker network newest-cni-653717 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1217 00:42:39.817179  292081 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-653717 newest-cni-653717
	I1217 00:42:39.867249  292081 network_create.go:108] docker network newest-cni-653717 192.168.94.0/24 created
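
network_create.go picks the cluster subnet by walking candidate 192.168.x.0/24 ranges (49, 58, 67, 76, 85, 94 in this run) and taking the first one no existing bridge already owns. A small stdlib sketch of that selection; the step size and taken set are inferred from the lines above, not lifted from minikube's source.

    package main

    import "fmt"

    // pickFreeSubnet mirrors the scan above: start at 192.168.49.0/24 and step
    // the third octet until a subnet is found that no existing docker bridge
    // uses. taken would be built from `docker network inspect` output.
    func pickFreeSubnet(taken map[string]bool) (string, bool) {
        for octet := 49; octet <= 254; octet += 9 {
            cidr := fmt.Sprintf("192.168.%d.0/24", octet)
            if !taken[cidr] {
                return cidr, true
            }
        }
        return "", false
    }

    func main() {
        taken := map[string]bool{ // subnets already in use in the run above
            "192.168.49.0/24": true, "192.168.58.0/24": true, "192.168.67.0/24": true,
            "192.168.76.0/24": true, "192.168.85.0/24": true,
        }
        subnet, ok := pickFreeSubnet(taken)
        fmt.Println(subnet, ok) // 192.168.94.0/24 true
    }
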
	I1217 00:42:39.867283  292081 kic.go:121] calculated static IP "192.168.94.2" for the "newest-cni-653717" container
	I1217 00:42:39.867363  292081 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1217 00:42:39.884952  292081 cli_runner.go:164] Run: docker volume create newest-cni-653717 --label name.minikube.sigs.k8s.io=newest-cni-653717 --label created_by.minikube.sigs.k8s.io=true
	I1217 00:42:39.903653  292081 oci.go:103] Successfully created a docker volume newest-cni-653717
	I1217 00:42:39.903740  292081 cli_runner.go:164] Run: docker run --rm --name newest-cni-653717-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-653717 --entrypoint /usr/bin/test -v newest-cni-653717:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib
	I1217 00:42:40.332097  292081 oci.go:107] Successfully prepared a docker volume newest-cni-653717
	I1217 00:42:40.332180  292081 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1217 00:42:40.332197  292081 kic.go:194] Starting extracting preloaded images to volume ...
	I1217 00:42:40.332280  292081 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22168-12816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-653717:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir
	I1217 00:42:44.481201  292081 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22168-12816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-653717:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir: (4.148853331s)
	I1217 00:42:44.481236  292081 kic.go:203] duration metric: took 4.149035302s to extract preloaded images to volume ...
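
The preload is unpacked by a throwaway container that mounts the host tarball read-only plus the data volume and untars one into the other. A hedged os/exec sketch of the same docker invocation; paths and image are taken from the command above (image digest omitted), and error handling is trimmed.

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        tarball := "/home/jenkins/minikube-integration/22168-12816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4"
        image := "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141"

        start := time.Now()
        // Short-lived container whose only job is to untar the preload archive
        // into the named volume, as the cli_runner line above does.
        cmd := exec.Command("docker", "run", "--rm",
            "--entrypoint", "/usr/bin/tar",
            "-v", tarball+":/preloaded.tar:ro",
            "-v", "newest-cni-653717:/extractDir",
            image,
            "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
        out, err := cmd.CombinedOutput()
        fmt.Printf("took %s, err=%v\n%s", time.Since(start), err, out)
    }
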
	W1217 00:42:44.481343  292081 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1217 00:42:44.481388  292081 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1217 00:42:44.481435  292081 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1217 00:42:42.731891  290128 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:42:42.731907  290128 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 00:42:42.731955  290128 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-864613
	I1217 00:42:42.763570  290128 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 00:42:42.763694  290128 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 00:42:42.763796  290128 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-864613
	I1217 00:42:42.767548  290128 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/no-preload-864613/id_rsa Username:docker}
	I1217 00:42:42.770701  290128 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/no-preload-864613/id_rsa Username:docker}
	I1217 00:42:42.798221  290128 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/no-preload-864613/id_rsa Username:docker}
	I1217 00:42:42.895839  290128 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 00:42:42.898131  290128 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:42:42.919421  290128 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1217 00:42:42.919446  290128 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1217 00:42:42.924414  290128 node_ready.go:35] waiting up to 6m0s for node "no-preload-864613" to be "Ready" ...
	I1217 00:42:42.925969  290128 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:42:42.957244  290128 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1217 00:42:42.957271  290128 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1217 00:42:42.992919  290128 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1217 00:42:42.992940  290128 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1217 00:42:43.014226  290128 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1217 00:42:43.014254  290128 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1217 00:42:43.030103  290128 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1217 00:42:43.030126  290128 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1217 00:42:43.045016  290128 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1217 00:42:43.045040  290128 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1217 00:42:43.058207  290128 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1217 00:42:43.058229  290128 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1217 00:42:43.073567  290128 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1217 00:42:43.073591  290128 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1217 00:42:43.089409  290128 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1217 00:42:43.089435  290128 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1217 00:42:43.104309  290128 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1217 00:42:43.888759  290128 node_ready.go:49] node "no-preload-864613" is "Ready"
	I1217 00:42:43.888791  290128 node_ready.go:38] duration metric: took 964.340322ms for node "no-preload-864613" to be "Ready" ...
	I1217 00:42:43.888806  290128 api_server.go:52] waiting for apiserver process to appear ...
	I1217 00:42:43.888858  290128 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:44.730253  290128 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.832086371s)
	I1217 00:42:44.730302  290128 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.804306557s)
	I1217 00:42:44.730394  290128 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.626038593s)
	I1217 00:42:44.730467  290128 api_server.go:72] duration metric: took 2.042440177s to wait for apiserver process to appear ...
	I1217 00:42:44.730494  290128 api_server.go:88] waiting for apiserver healthz status ...
	I1217 00:42:44.730534  290128 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1217 00:42:44.732808  290128 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-864613 addons enable metrics-server
	
	I1217 00:42:44.736310  290128 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1217 00:42:44.736333  290128 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1217 00:42:44.739032  290128 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1217 00:42:44.740068  290128 addons.go:530] duration metric: took 2.051763832s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
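
The 500 above is expected while the rbac and priority-class poststarthooks finish; the check simply keeps polling /healthz until it sees a 200. A minimal sketch of such a poll loop, assuming TLS verification is skipped for brevity (the real check trusts the cluster CA instead).

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        // NOTE: InsecureSkipVerify is a shortcut for this sketch only; minikube's
        // own health check uses the cluster CA rather than disabling verification.
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            resp, err := client.Get("https://192.168.103.2:8443/healthz")
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Println("apiserver healthy")
                    return
                }
                fmt.Println("healthz returned", resp.StatusCode, "- retrying")
            }
            time.Sleep(time.Second)
        }
        fmt.Println("timed out waiting for healthz")
    }
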
	I1217 00:42:44.554887  292081 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-653717 --name newest-cni-653717 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-653717 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-653717 --network newest-cni-653717 --ip 192.168.94.2 --volume newest-cni-653717:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78
	I1217 00:42:44.859934  292081 cli_runner.go:164] Run: docker container inspect newest-cni-653717 --format={{.State.Running}}
	I1217 00:42:44.879772  292081 cli_runner.go:164] Run: docker container inspect newest-cni-653717 --format={{.State.Status}}
	I1217 00:42:44.902759  292081 cli_runner.go:164] Run: docker exec newest-cni-653717 stat /var/lib/dpkg/alternatives/iptables
	I1217 00:42:44.958456  292081 oci.go:144] the created container "newest-cni-653717" has a running status.
	I1217 00:42:44.958499  292081 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22168-12816/.minikube/machines/newest-cni-653717/id_rsa...
	I1217 00:42:45.146969  292081 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22168-12816/.minikube/machines/newest-cni-653717/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1217 00:42:45.178425  292081 cli_runner.go:164] Run: docker container inspect newest-cni-653717 --format={{.State.Status}}
	I1217 00:42:45.205673  292081 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1217 00:42:45.205749  292081 kic_runner.go:114] Args: [docker exec --privileged newest-cni-653717 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1217 00:42:45.272222  292081 cli_runner.go:164] Run: docker container inspect newest-cni-653717 --format={{.State.Status}}
	I1217 00:42:45.302920  292081 machine.go:94] provisionDockerMachine start ...
	I1217 00:42:45.303079  292081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-653717
	I1217 00:42:45.332494  292081 main.go:143] libmachine: Using SSH client type: native
	I1217 00:42:45.332879  292081 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1217 00:42:45.332905  292081 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 00:42:45.470045  292081 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-653717
	
	I1217 00:42:45.470072  292081 ubuntu.go:182] provisioning hostname "newest-cni-653717"
	I1217 00:42:45.470145  292081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-653717
	I1217 00:42:45.489669  292081 main.go:143] libmachine: Using SSH client type: native
	I1217 00:42:45.489903  292081 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1217 00:42:45.489921  292081 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-653717 && echo "newest-cni-653717" | sudo tee /etc/hostname
	I1217 00:42:45.644161  292081 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-653717
	
	I1217 00:42:45.644290  292081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-653717
	I1217 00:42:45.670660  292081 main.go:143] libmachine: Using SSH client type: native
	I1217 00:42:45.670959  292081 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1217 00:42:45.671001  292081 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-653717' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-653717/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-653717' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 00:42:45.810630  292081 main.go:143] libmachine: SSH cmd err, output: <nil>: 
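
The provisioning steps above run shell snippets over SSH using libmachine's native Go client. A rough golang.org/x/crypto/ssh equivalent of the `hostname` round-trip; the key path and port are copied from this run, and host-key checking is disabled only to keep the sketch short.

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/22168-12816/.minikube/machines/newest-cni-653717/id_rsa")
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(keyBytes)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // sketch only
        }
        client, err := ssh.Dial("tcp", "127.0.0.1:33088", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()
        session, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer session.Close()
        out, err := session.CombinedOutput("hostname")
        fmt.Printf("err=%v output=%s", err, out)
    }
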
	I1217 00:42:45.810662  292081 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22168-12816/.minikube CaCertPath:/home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22168-12816/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22168-12816/.minikube}
	I1217 00:42:45.810686  292081 ubuntu.go:190] setting up certificates
	I1217 00:42:45.810696  292081 provision.go:84] configureAuth start
	I1217 00:42:45.810765  292081 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-653717
	I1217 00:42:45.829459  292081 provision.go:143] copyHostCerts
	I1217 00:42:45.829525  292081 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-12816/.minikube/ca.pem, removing ...
	I1217 00:42:45.829539  292081 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-12816/.minikube/ca.pem
	I1217 00:42:45.829631  292081 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22168-12816/.minikube/ca.pem (1078 bytes)
	I1217 00:42:45.829741  292081 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-12816/.minikube/cert.pem, removing ...
	I1217 00:42:45.829751  292081 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-12816/.minikube/cert.pem
	I1217 00:42:45.829780  292081 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22168-12816/.minikube/cert.pem (1123 bytes)
	I1217 00:42:45.829850  292081 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-12816/.minikube/key.pem, removing ...
	I1217 00:42:45.829858  292081 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-12816/.minikube/key.pem
	I1217 00:42:45.829882  292081 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22168-12816/.minikube/key.pem (1679 bytes)
	I1217 00:42:45.829934  292081 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22168-12816/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca-key.pem org=jenkins.newest-cni-653717 san=[127.0.0.1 192.168.94.2 localhost minikube newest-cni-653717]
	I1217 00:42:45.958055  292081 provision.go:177] copyRemoteCerts
	I1217 00:42:45.958127  292081 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 00:42:45.958174  292081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-653717
	I1217 00:42:45.984112  292081 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/newest-cni-653717/id_rsa Username:docker}
	I1217 00:42:46.086012  292081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1217 00:42:46.104624  292081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1217 00:42:46.121927  292081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1217 00:42:46.138837  292081 provision.go:87] duration metric: took 328.114013ms to configureAuth
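
configureAuth generates a server certificate signed by the local minikube CA with the SANs listed above (127.0.0.1, 192.168.94.2, localhost, minikube, newest-cni-653717). A condensed crypto/x509 sketch of that step; loadCA is a hypothetical helper for reading the CA pair, and the key size and lifetime are illustrative.

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    // loadCA is a hypothetical helper: read ca.pem / ca-key.pem from the
    // .minikube/certs directory and parse them into usable types.
    func loadCA() (*x509.Certificate, *rsa.PrivateKey) { panic("fill in for real use") }

    func main() {
        caCert, caKey := loadCA()

        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-653717"}},
            NotBefore:    time.Now().Add(-time.Hour),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour), // illustrative lifetime
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"localhost", "minikube", "newest-cni-653717"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.94.2")},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
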
	I1217 00:42:46.138862  292081 ubuntu.go:206] setting minikube options for container-runtime
	I1217 00:42:46.139071  292081 config.go:182] Loaded profile config "newest-cni-653717": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1217 00:42:46.139186  292081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-653717
	I1217 00:42:46.157087  292081 main.go:143] libmachine: Using SSH client type: native
	I1217 00:42:46.157347  292081 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1217 00:42:46.157376  292081 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 00:42:46.424454  292081 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 00:42:46.424492  292081 machine.go:97] duration metric: took 1.121525305s to provisionDockerMachine
	I1217 00:42:46.424503  292081 client.go:176] duration metric: took 6.666411162s to LocalClient.Create
	I1217 00:42:46.424518  292081 start.go:167] duration metric: took 6.666478769s to libmachine.API.Create "newest-cni-653717"
	I1217 00:42:46.424527  292081 start.go:293] postStartSetup for "newest-cni-653717" (driver="docker")
	I1217 00:42:46.424540  292081 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 00:42:46.424592  292081 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 00:42:46.424624  292081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-653717
	I1217 00:42:46.442796  292081 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/newest-cni-653717/id_rsa Username:docker}
	I1217 00:42:46.536618  292081 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 00:42:46.540051  292081 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 00:42:46.540072  292081 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 00:42:46.540082  292081 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-12816/.minikube/addons for local assets ...
	I1217 00:42:46.540139  292081 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-12816/.minikube/files for local assets ...
	I1217 00:42:46.540216  292081 filesync.go:149] local asset: /home/jenkins/minikube-integration/22168-12816/.minikube/files/etc/ssl/certs/163542.pem -> 163542.pem in /etc/ssl/certs
	I1217 00:42:46.540306  292081 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 00:42:46.547511  292081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/files/etc/ssl/certs/163542.pem --> /etc/ssl/certs/163542.pem (1708 bytes)
	I1217 00:42:46.567395  292081 start.go:296] duration metric: took 142.85649ms for postStartSetup
	I1217 00:42:46.567722  292081 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-653717
	I1217 00:42:46.586027  292081 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/newest-cni-653717/config.json ...
	I1217 00:42:46.586297  292081 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 00:42:46.586350  292081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-653717
	I1217 00:42:46.604529  292081 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/newest-cni-653717/id_rsa Username:docker}
	I1217 00:42:46.695141  292081 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 00:42:46.700409  292081 start.go:128] duration metric: took 6.945052111s to createHost
	I1217 00:42:46.700434  292081 start.go:83] releasing machines lock for "newest-cni-653717", held for 6.94528556s
	I1217 00:42:46.700506  292081 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-653717
	I1217 00:42:46.719971  292081 ssh_runner.go:195] Run: cat /version.json
	I1217 00:42:46.720049  292081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-653717
	I1217 00:42:46.720057  292081 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 00:42:46.720124  292081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-653717
	I1217 00:42:46.738390  292081 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/newest-cni-653717/id_rsa Username:docker}
	I1217 00:42:46.738747  292081 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/newest-cni-653717/id_rsa Username:docker}
	I1217 00:42:46.882381  292081 ssh_runner.go:195] Run: systemctl --version
	I1217 00:42:46.888882  292081 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 00:42:46.924064  292081 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 00:42:46.928655  292081 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 00:42:46.928703  292081 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 00:42:46.953084  292081 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1217 00:42:46.953107  292081 start.go:496] detecting cgroup driver to use...
	I1217 00:42:46.953139  292081 detect.go:190] detected "systemd" cgroup driver on host os
	I1217 00:42:46.953190  292081 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 00:42:46.969605  292081 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 00:42:46.981627  292081 docker.go:218] disabling cri-docker service (if available) ...
	I1217 00:42:46.981696  292081 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 00:42:46.997969  292081 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 00:42:47.015481  292081 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 00:42:47.102372  292081 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 00:42:47.195858  292081 docker.go:234] disabling docker service ...
	I1217 00:42:47.195927  292081 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 00:42:47.214755  292081 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 00:42:47.228327  292081 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 00:42:47.313282  292081 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 00:42:47.402263  292081 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 00:42:47.415123  292081 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 00:42:47.429297  292081 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 00:42:47.429343  292081 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:42:47.439140  292081 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1217 00:42:47.439181  292081 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:42:47.447551  292081 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:42:47.456120  292081 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:42:47.464976  292081 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 00:42:47.472532  292081 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:42:47.480749  292081 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:42:47.494427  292081 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:42:47.502977  292081 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 00:42:47.510020  292081 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 00:42:47.517810  292081 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:42:47.603008  292081 ssh_runner.go:195] Run: sudo systemctl restart crio
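
Reconstructed from the sed commands above (not captured from the node), the /etc/crio/crio.conf.d/02-crio.conf drop-in ends up with roughly these values before crio is restarted:

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"

    [crio.runtime]
    cgroup_manager = "systemd"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
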
	I1217 00:42:47.760326  292081 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 00:42:47.760395  292081 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 00:42:47.764833  292081 start.go:564] Will wait 60s for crictl version
	I1217 00:42:47.764898  292081 ssh_runner.go:195] Run: which crictl
	I1217 00:42:47.768771  292081 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 00:42:47.794033  292081 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1217 00:42:47.794112  292081 ssh_runner.go:195] Run: crio --version
	I1217 00:42:47.825452  292081 ssh_runner.go:195] Run: crio --version
	I1217 00:42:47.857636  292081 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1217 00:42:47.858656  292081 cli_runner.go:164] Run: docker network inspect newest-cni-653717 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 00:42:47.876540  292081 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1217 00:42:47.880551  292081 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 00:42:47.891708  292081 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	W1217 00:42:43.579985  284412 node_ready.go:57] node "default-k8s-diff-port-414413" has "Ready":"False" status (will retry)
	W1217 00:42:45.580315  284412 node_ready.go:57] node "default-k8s-diff-port-414413" has "Ready":"False" status (will retry)
	W1217 00:42:47.580369  284412 node_ready.go:57] node "default-k8s-diff-port-414413" has "Ready":"False" status (will retry)
	I1217 00:42:47.892665  292081 kubeadm.go:884] updating cluster {Name:newest-cni-653717 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-653717 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 00:42:47.892819  292081 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1217 00:42:47.892873  292081 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 00:42:47.922682  292081 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 00:42:47.922702  292081 crio.go:433] Images already preloaded, skipping extraction
	I1217 00:42:47.922742  292081 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 00:42:47.948548  292081 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 00:42:47.948566  292081 cache_images.go:86] Images are preloaded, skipping loading
	I1217 00:42:47.948572  292081 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.35.0-beta.0 crio true true} ...
	I1217 00:42:47.948644  292081 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-653717 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-653717 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 00:42:47.948706  292081 ssh_runner.go:195] Run: crio config
	I1217 00:42:47.998076  292081 cni.go:84] Creating CNI manager for ""
	I1217 00:42:47.998103  292081 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 00:42:47.998123  292081 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1217 00:42:47.998153  292081 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-653717 NodeName:newest-cni-653717 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 00:42:47.998316  292081 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-653717"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 00:42:47.998384  292081 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1217 00:42:48.008451  292081 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 00:42:48.008505  292081 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 00:42:48.018068  292081 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1217 00:42:48.032092  292081 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1217 00:42:48.046963  292081 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2218 bytes)
	I1217 00:42:48.058965  292081 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1217 00:42:48.062632  292081 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 00:42:48.072208  292081 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:42:48.155827  292081 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 00:42:48.181149  292081 certs.go:69] Setting up /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/newest-cni-653717 for IP: 192.168.94.2
	I1217 00:42:48.181168  292081 certs.go:195] generating shared ca certs ...
	I1217 00:42:48.181185  292081 certs.go:227] acquiring lock for ca certs: {Name:mk3fafd0dd66863a6056cb02497503a5e6afecf4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:42:48.181315  292081 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22168-12816/.minikube/ca.key
	I1217 00:42:48.181355  292081 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22168-12816/.minikube/proxy-client-ca.key
	I1217 00:42:48.181365  292081 certs.go:257] generating profile certs ...
	I1217 00:42:48.181431  292081 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/newest-cni-653717/client.key
	I1217 00:42:48.181455  292081 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/newest-cni-653717/client.crt with IP's: []
	I1217 00:42:48.204435  292081 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/newest-cni-653717/client.crt ...
	I1217 00:42:48.204457  292081 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/newest-cni-653717/client.crt: {Name:mk706a547645679cf593c6b6b64a5b13d6509c3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:42:48.204624  292081 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/newest-cni-653717/client.key ...
	I1217 00:42:48.204643  292081 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/newest-cni-653717/client.key: {Name:mk2afcb3a7b31c81f1f103ac537112f286b679a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:42:48.204746  292081 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/newest-cni-653717/apiserver.key.17c07d81
	I1217 00:42:48.204762  292081 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/newest-cni-653717/apiserver.crt.17c07d81 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1217 00:42:48.250524  292081 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/newest-cni-653717/apiserver.crt.17c07d81 ...
	I1217 00:42:48.250546  292081 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/newest-cni-653717/apiserver.crt.17c07d81: {Name:mk9b44a0d7e2e4ebfad604c15171baaa270cfc11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:42:48.250684  292081 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/newest-cni-653717/apiserver.key.17c07d81 ...
	I1217 00:42:48.250696  292081 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/newest-cni-653717/apiserver.key.17c07d81: {Name:mk49169f7d724cca6994caea611fcf0ceba24cbf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:42:48.250766  292081 certs.go:382] copying /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/newest-cni-653717/apiserver.crt.17c07d81 -> /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/newest-cni-653717/apiserver.crt
	I1217 00:42:48.250832  292081 certs.go:386] copying /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/newest-cni-653717/apiserver.key.17c07d81 -> /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/newest-cni-653717/apiserver.key
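The apiserver serving certificate assembled above is signed for the IPs listed on the crypto.go:68 line (10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.94.2). A minimal sketch, reusing the profile path from this log, for checking which SANs actually ended up in the generated certificate:

  openssl x509 -noout -text \
    -in /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/newest-cni-653717/apiserver.crt \
    | grep -A1 "Subject Alternative Name"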
	I1217 00:42:48.250890  292081 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/newest-cni-653717/proxy-client.key
	I1217 00:42:48.250905  292081 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/newest-cni-653717/proxy-client.crt with IP's: []
	I1217 00:42:48.311073  292081 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/newest-cni-653717/proxy-client.crt ...
	I1217 00:42:48.311096  292081 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/newest-cni-653717/proxy-client.crt: {Name:mk191e919f78ff769818c78eee7f416c2b6c7966 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:42:48.311228  292081 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/newest-cni-653717/proxy-client.key ...
	I1217 00:42:48.311240  292081 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/newest-cni-653717/proxy-client.key: {Name:mk28eaa7bd38fac072b93b2b9e0af2cc79a6b0d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:42:48.311403  292081 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/16354.pem (1338 bytes)
	W1217 00:42:48.311447  292081 certs.go:480] ignoring /home/jenkins/minikube-integration/22168-12816/.minikube/certs/16354_empty.pem, impossibly tiny 0 bytes
	I1217 00:42:48.311462  292081 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca-key.pem (1679 bytes)
	I1217 00:42:48.311499  292081 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem (1078 bytes)
	I1217 00:42:48.311527  292081 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/cert.pem (1123 bytes)
	I1217 00:42:48.311550  292081 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/key.pem (1679 bytes)
	I1217 00:42:48.311593  292081 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/files/etc/ssl/certs/163542.pem (1708 bytes)
	I1217 00:42:48.312140  292081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 00:42:48.330195  292081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1217 00:42:48.346815  292081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 00:42:48.363297  292081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1671 bytes)
	I1217 00:42:48.380352  292081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/newest-cni-653717/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1217 00:42:48.396979  292081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/newest-cni-653717/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1217 00:42:48.413283  292081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/newest-cni-653717/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 00:42:48.429487  292081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/newest-cni-653717/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1217 00:42:48.446870  292081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/files/etc/ssl/certs/163542.pem --> /usr/share/ca-certificates/163542.pem (1708 bytes)
	I1217 00:42:48.465676  292081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 00:42:48.482658  292081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/certs/16354.pem --> /usr/share/ca-certificates/16354.pem (1338 bytes)
	I1217 00:42:48.499956  292081 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 00:42:48.511878  292081 ssh_runner.go:195] Run: openssl version
	I1217 00:42:48.517834  292081 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/163542.pem
	I1217 00:42:48.524686  292081 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/163542.pem /etc/ssl/certs/163542.pem
	I1217 00:42:48.531652  292081 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/163542.pem
	I1217 00:42:48.535189  292081 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 00:13 /usr/share/ca-certificates/163542.pem
	I1217 00:42:48.535244  292081 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/163542.pem
	I1217 00:42:48.572541  292081 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 00:42:48.580505  292081 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/163542.pem /etc/ssl/certs/3ec20f2e.0
	I1217 00:42:48.587403  292081 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:42:48.594664  292081 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 00:42:48.602161  292081 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:42:48.605749  292081 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 00:05 /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:42:48.605792  292081 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:42:48.647175  292081 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 00:42:48.654933  292081 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1217 00:42:48.662929  292081 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/16354.pem
	I1217 00:42:48.670361  292081 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/16354.pem /etc/ssl/certs/16354.pem
	I1217 00:42:48.677425  292081 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16354.pem
	I1217 00:42:48.680914  292081 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 00:13 /usr/share/ca-certificates/16354.pem
	I1217 00:42:48.680965  292081 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16354.pem
	I1217 00:42:48.717977  292081 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 00:42:48.725584  292081 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/16354.pem /etc/ssl/certs/51391683.0
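The symlink names created above (3ec20f2e.0, b5213941.0 and 51391683.0) are the openssl x509 -hash values of the corresponding certificates; a <hash>.0 link under /etc/ssl/certs is what lets OpenSSL's default lookup resolve each CA. A minimal sketch of the same idiom, reusing the minikubeCA file from this log:

  # prints b5213941 for this CA, matching the link created above
  hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
  sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
  # should report OK once the hash link is in place
  openssl verify -CApath /etc/ssl/certs /usr/share/ca-certificates/minikubeCA.pem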
	I1217 00:42:48.733342  292081 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 00:42:48.737247  292081 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1217 00:42:48.737296  292081 kubeadm.go:401] StartCluster: {Name:newest-cni-653717 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-653717 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:42:48.737379  292081 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 00:42:48.737429  292081 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 00:42:48.766866  292081 cri.go:89] found id: ""
	I1217 00:42:48.766920  292081 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 00:42:48.775388  292081 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 00:42:48.784570  292081 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 00:42:48.784637  292081 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 00:42:48.794346  292081 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 00:42:48.794366  292081 kubeadm.go:158] found existing configuration files:
	
	I1217 00:42:48.794414  292081 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1217 00:42:48.804623  292081 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 00:42:48.804684  292081 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 00:42:48.814188  292081 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1217 00:42:48.824205  292081 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 00:42:48.824260  292081 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 00:42:48.833632  292081 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1217 00:42:48.843633  292081 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 00:42:48.843687  292081 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 00:42:48.852733  292081 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1217 00:42:48.863156  292081 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 00:42:48.863217  292081 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 00:42:48.871629  292081 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 00:42:48.918628  292081 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1217 00:42:48.918706  292081 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 00:42:49.012576  292081 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 00:42:49.012694  292081 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1217 00:42:49.012779  292081 kubeadm.go:319] OS: Linux
	I1217 00:42:49.012850  292081 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 00:42:49.012934  292081 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 00:42:49.012981  292081 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 00:42:49.013068  292081 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 00:42:49.013147  292081 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 00:42:49.013231  292081 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 00:42:49.013306  292081 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 00:42:49.013350  292081 kubeadm.go:319] CGROUPS_IO: enabled
	I1217 00:42:49.079170  292081 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 00:42:49.079317  292081 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 00:42:49.079463  292081 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 00:42:49.089927  292081 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 00:42:49.092958  292081 out.go:252]   - Generating certificates and keys ...
	I1217 00:42:49.093071  292081 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 00:42:49.093173  292081 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 00:42:49.222054  292081 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1217 00:42:49.258747  292081 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1217 00:42:49.400834  292081 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1217 00:42:49.535425  292081 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1217 00:42:45.231179  290128 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1217 00:42:45.239052  290128 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1217 00:42:45.240536  290128 api_server.go:141] control plane version: v1.35.0-beta.0
	I1217 00:42:45.240626  290128 api_server.go:131] duration metric: took 510.122414ms to wait for apiserver health ...
	I1217 00:42:45.240667  290128 system_pods.go:43] waiting for kube-system pods to appear ...
	I1217 00:42:45.245445  290128 system_pods.go:59] 8 kube-system pods found
	I1217 00:42:45.245514  290128 system_pods.go:61] "coredns-7d764666f9-6ql6r" [7fe29911-eb02-4cea-b42b-254fe65a4e65] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 00:42:45.245533  290128 system_pods.go:61] "etcd-no-preload-864613" [2cd02c45-52c1-43f0-8160-939b70247653] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1217 00:42:45.245594  290128 system_pods.go:61] "kindnet-bpf4x" [0b42df61-fef2-41ff-83f3-0abede84a5fb] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1217 00:42:45.245612  290128 system_pods.go:61] "kube-apiserver-no-preload-864613" [039d37cf-0e0f-45fa-9d35-a0a4deb68c2f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1217 00:42:45.245619  290128 system_pods.go:61] "kube-controller-manager-no-preload-864613" [bb99a38a-1b12-43f0-b562-96bca9e3f8fa] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1217 00:42:45.245625  290128 system_pods.go:61] "kube-proxy-2kddk" [7153c193-9583-4abd-a828-ec1dc91151e2] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1217 00:42:45.245630  290128 system_pods.go:61] "kube-scheduler-no-preload-864613" [10f61f47-8e53-41ce-b820-7e662dd29fcb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1217 00:42:45.245675  290128 system_pods.go:61] "storage-provisioner" [bf26b73d-473d-43a0-bf42-4d69abdd9e31] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 00:42:45.245778  290128 system_pods.go:74] duration metric: took 5.02268ms to wait for pod list to return data ...
	I1217 00:42:45.245808  290128 default_sa.go:34] waiting for default service account to be created ...
	I1217 00:42:45.249871  290128 default_sa.go:45] found service account: "default"
	I1217 00:42:45.249896  290128 default_sa.go:55] duration metric: took 4.070194ms for default service account to be created ...
	I1217 00:42:45.249909  290128 system_pods.go:116] waiting for k8s-apps to be running ...
	I1217 00:42:45.254534  290128 system_pods.go:86] 8 kube-system pods found
	I1217 00:42:45.254572  290128 system_pods.go:89] "coredns-7d764666f9-6ql6r" [7fe29911-eb02-4cea-b42b-254fe65a4e65] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 00:42:45.254585  290128 system_pods.go:89] "etcd-no-preload-864613" [2cd02c45-52c1-43f0-8160-939b70247653] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1217 00:42:45.254594  290128 system_pods.go:89] "kindnet-bpf4x" [0b42df61-fef2-41ff-83f3-0abede84a5fb] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1217 00:42:45.254603  290128 system_pods.go:89] "kube-apiserver-no-preload-864613" [039d37cf-0e0f-45fa-9d35-a0a4deb68c2f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1217 00:42:45.254612  290128 system_pods.go:89] "kube-controller-manager-no-preload-864613" [bb99a38a-1b12-43f0-b562-96bca9e3f8fa] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1217 00:42:45.254620  290128 system_pods.go:89] "kube-proxy-2kddk" [7153c193-9583-4abd-a828-ec1dc91151e2] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1217 00:42:45.254634  290128 system_pods.go:89] "kube-scheduler-no-preload-864613" [10f61f47-8e53-41ce-b820-7e662dd29fcb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1217 00:42:45.254647  290128 system_pods.go:89] "storage-provisioner" [bf26b73d-473d-43a0-bf42-4d69abdd9e31] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 00:42:45.254656  290128 system_pods.go:126] duration metric: took 4.73972ms to wait for k8s-apps to be running ...
	I1217 00:42:45.254666  290128 system_svc.go:44] waiting for kubelet service to be running ....
	I1217 00:42:45.254716  290128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 00:42:45.275259  290128 system_svc.go:56] duration metric: took 20.587102ms WaitForService to wait for kubelet
	I1217 00:42:45.275297  290128 kubeadm.go:587] duration metric: took 2.587270544s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 00:42:45.275318  290128 node_conditions.go:102] verifying NodePressure condition ...
	I1217 00:42:45.285140  290128 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1217 00:42:45.285176  290128 node_conditions.go:123] node cpu capacity is 8
	I1217 00:42:45.285194  290128 node_conditions.go:105] duration metric: took 9.870357ms to run NodePressure ...
	I1217 00:42:45.285208  290128 start.go:242] waiting for startup goroutines ...
	I1217 00:42:45.285219  290128 start.go:247] waiting for cluster config update ...
	I1217 00:42:45.285233  290128 start.go:256] writing updated cluster config ...
	I1217 00:42:45.285542  290128 ssh_runner.go:195] Run: rm -f paused
	I1217 00:42:45.292170  290128 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 00:42:45.297160  290128 pod_ready.go:83] waiting for pod "coredns-7d764666f9-6ql6r" in "kube-system" namespace to be "Ready" or be gone ...
	W1217 00:42:47.302980  290128 pod_ready.go:104] pod "coredns-7d764666f9-6ql6r" is not "Ready", error: <nil>
	W1217 00:42:49.303419  290128 pod_ready.go:104] pod "coredns-7d764666f9-6ql6r" is not "Ready", error: <nil>
	I1217 00:42:49.875900  292081 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1217 00:42:49.876099  292081 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-653717] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1217 00:42:49.960694  292081 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1217 00:42:49.960901  292081 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-653717] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1217 00:42:49.986333  292081 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1217 00:42:50.038475  292081 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1217 00:42:50.210231  292081 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1217 00:42:50.210371  292081 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 00:42:50.371871  292081 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 00:42:50.467844  292081 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 00:42:50.524877  292081 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 00:42:50.559110  292081 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 00:42:50.627240  292081 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 00:42:50.627953  292081 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 00:42:50.635874  292081 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
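The [certs], [kubeconfig], [etcd] and [control-plane] markers above are the standard phases of kubeadm init, driven here by the config file staged earlier. A minimal sketch, assuming the same binary path and config file as in this log, of re-running individual phases by hand should one of them need to be repeated:

  sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml
  sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml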
	
	
	==> CRI-O <==
	Dec 17 00:42:41 embed-certs-153232 crio[776]: time="2025-12-17T00:42:41.751138169Z" level=info msg="Starting container: e94997b4652590d8cf6ac602518ff8b01b9ccdb4b56bd40418bfb0f1baa8941e" id=4c055ef3-8181-4432-ad3e-8604a150a4b6 name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 00:42:41 embed-certs-153232 crio[776]: time="2025-12-17T00:42:41.754330577Z" level=info msg="Started container" PID=1850 containerID=e94997b4652590d8cf6ac602518ff8b01b9ccdb4b56bd40418bfb0f1baa8941e description=kube-system/coredns-66bc5c9577-vtspd/coredns id=4c055ef3-8181-4432-ad3e-8604a150a4b6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e79cad410ae91e8852c01825f7c0e935a46831306ffac7c78e44cef197e5f567
	Dec 17 00:42:45 embed-certs-153232 crio[776]: time="2025-12-17T00:42:45.048470523Z" level=info msg="Running pod sandbox: default/busybox/POD" id=2504e638-317b-4a87-863e-05f53b741ad3 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 17 00:42:45 embed-certs-153232 crio[776]: time="2025-12-17T00:42:45.048539155Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 00:42:45 embed-certs-153232 crio[776]: time="2025-12-17T00:42:45.053907187Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:ed300505eac447e2cc110400efed7838ca3185190afbe9333464ae0def40d12c UID:2ded7c57-a893-4051-8499-a73941ba914b NetNS:/var/run/netns/c006dfab-6605-458c-be5e-4b5aa3097932 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000e90440}] Aliases:map[]}"
	Dec 17 00:42:45 embed-certs-153232 crio[776]: time="2025-12-17T00:42:45.053940263Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 17 00:42:45 embed-certs-153232 crio[776]: time="2025-12-17T00:42:45.065627061Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:ed300505eac447e2cc110400efed7838ca3185190afbe9333464ae0def40d12c UID:2ded7c57-a893-4051-8499-a73941ba914b NetNS:/var/run/netns/c006dfab-6605-458c-be5e-4b5aa3097932 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000e90440}] Aliases:map[]}"
	Dec 17 00:42:45 embed-certs-153232 crio[776]: time="2025-12-17T00:42:45.065801067Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 17 00:42:45 embed-certs-153232 crio[776]: time="2025-12-17T00:42:45.066740006Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 17 00:42:45 embed-certs-153232 crio[776]: time="2025-12-17T00:42:45.067862519Z" level=info msg="Ran pod sandbox ed300505eac447e2cc110400efed7838ca3185190afbe9333464ae0def40d12c with infra container: default/busybox/POD" id=2504e638-317b-4a87-863e-05f53b741ad3 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 17 00:42:45 embed-certs-153232 crio[776]: time="2025-12-17T00:42:45.06923052Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=9da54124-a23c-4007-9b0f-741c672ff620 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:42:45 embed-certs-153232 crio[776]: time="2025-12-17T00:42:45.069374493Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=9da54124-a23c-4007-9b0f-741c672ff620 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:42:45 embed-certs-153232 crio[776]: time="2025-12-17T00:42:45.069421433Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=9da54124-a23c-4007-9b0f-741c672ff620 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:42:45 embed-certs-153232 crio[776]: time="2025-12-17T00:42:45.070183729Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=42b05bf1-d368-41f0-a345-16486e9557cd name=/runtime.v1.ImageService/PullImage
	Dec 17 00:42:45 embed-certs-153232 crio[776]: time="2025-12-17T00:42:45.072955327Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 17 00:42:45 embed-certs-153232 crio[776]: time="2025-12-17T00:42:45.72177857Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=42b05bf1-d368-41f0-a345-16486e9557cd name=/runtime.v1.ImageService/PullImage
	Dec 17 00:42:45 embed-certs-153232 crio[776]: time="2025-12-17T00:42:45.723529291Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=dd5b1230-4e54-4a29-8c8b-91b2d7d69d45 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:42:45 embed-certs-153232 crio[776]: time="2025-12-17T00:42:45.725112846Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=9215bbe1-2867-474a-88e9-7ac132f893c5 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:42:45 embed-certs-153232 crio[776]: time="2025-12-17T00:42:45.731243225Z" level=info msg="Creating container: default/busybox/busybox" id=2ddf5b9e-811d-4261-8969-e7fcd3bbf991 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 00:42:45 embed-certs-153232 crio[776]: time="2025-12-17T00:42:45.731370823Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 00:42:45 embed-certs-153232 crio[776]: time="2025-12-17T00:42:45.735471102Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 00:42:45 embed-certs-153232 crio[776]: time="2025-12-17T00:42:45.736049798Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 00:42:45 embed-certs-153232 crio[776]: time="2025-12-17T00:42:45.764745816Z" level=info msg="Created container 2df6b594bc4f431e2338e3c85b067a56e461981af2c5c3d80e8f2d4dd9cecf1d: default/busybox/busybox" id=2ddf5b9e-811d-4261-8969-e7fcd3bbf991 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 00:42:45 embed-certs-153232 crio[776]: time="2025-12-17T00:42:45.765359488Z" level=info msg="Starting container: 2df6b594bc4f431e2338e3c85b067a56e461981af2c5c3d80e8f2d4dd9cecf1d" id=819a90a9-6c2d-4df1-a4a0-ba001fdfd6b6 name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 00:42:45 embed-certs-153232 crio[776]: time="2025-12-17T00:42:45.766984962Z" level=info msg="Started container" PID=1930 containerID=2df6b594bc4f431e2338e3c85b067a56e461981af2c5c3d80e8f2d4dd9cecf1d description=default/busybox/busybox id=819a90a9-6c2d-4df1-a4a0-ba001fdfd6b6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=ed300505eac447e2cc110400efed7838ca3185190afbe9333464ae0def40d12c
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	2df6b594bc4f4       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   7 seconds ago       Running             busybox                   0                   ed300505eac44       busybox                                      default
	e94997b465259       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      11 seconds ago      Running             coredns                   0                   e79cad410ae91       coredns-66bc5c9577-vtspd                     kube-system
	28c45b755476b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      11 seconds ago      Running             storage-provisioner       0                   91ad0c23a4d49       storage-provisioner                          kube-system
	4eec4722c72f2       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                      22 seconds ago      Running             kube-proxy                0                   dc6108795514a       kube-proxy-82b8k                             kube-system
	8a8030c6215b6       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      22 seconds ago      Running             kindnet-cni               0                   d6f6abf037f63       kindnet-zffzt                                kube-system
	11c39a5e43473       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                      33 seconds ago      Running             kube-controller-manager   0                   5feb2e3a5852a       kube-controller-manager-embed-certs-153232   kube-system
	e873d9eb77a1f       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                      33 seconds ago      Running             kube-scheduler            0                   ce6da546adea3       kube-scheduler-embed-certs-153232            kube-system
	b98d64bd3caef       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                      33 seconds ago      Running             etcd                      0                   96ad58df58335       etcd-embed-certs-153232                      kube-system
	b957ef241567a       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                      33 seconds ago      Running             kube-apiserver            0                   fe6e236511877       kube-apiserver-embed-certs-153232            kube-system
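The table above is the CRI-O container listing for the embed-certs-153232 node. A minimal sketch of reproducing it on the node, assuming crictl is pointed at the CRI-O socket named in the kubelet configuration earlier in this report:

  # all containers, including exited ones
  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a
  # only kube-system pods, as the minikube log queries above
  sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system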
	
	
	==> coredns [e94997b4652590d8cf6ac602518ff8b01b9ccdb4b56bd40418bfb0f1baa8941e] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:36105 - 29967 "HINFO IN 7911738298528221531.5419242834623294701. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.117818736s
	
	
	==> describe nodes <==
	Name:               embed-certs-153232
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-153232
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c7bb9b74fe8fa422b352c813eb039f077f405cb1
	                    minikube.k8s.io/name=embed-certs-153232
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_17T00_42_25_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Dec 2025 00:42:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-153232
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Dec 2025 00:42:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Dec 2025 00:42:41 +0000   Wed, 17 Dec 2025 00:42:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Dec 2025 00:42:41 +0000   Wed, 17 Dec 2025 00:42:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Dec 2025 00:42:41 +0000   Wed, 17 Dec 2025 00:42:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Dec 2025 00:42:41 +0000   Wed, 17 Dec 2025 00:42:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    embed-certs-153232
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 f435001bb2f82675fe905cfc693dda54
	  System UUID:                5d400583-a23e-4e06-8ba1-0a6ece90e0c3
	  Boot ID:                    0e9cedc6-c46e-4354-b3d2-9272a8b33ae5
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 coredns-66bc5c9577-vtspd                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     23s
	  kube-system                 etcd-embed-certs-153232                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         29s
	  kube-system                 kindnet-zffzt                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      23s
	  kube-system                 kube-apiserver-embed-certs-153232             250m (3%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-controller-manager-embed-certs-153232    200m (2%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-proxy-82b8k                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	  kube-system                 kube-scheduler-embed-certs-153232             100m (1%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 22s   kube-proxy       
	  Normal  Starting                 29s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  29s   kubelet          Node embed-certs-153232 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29s   kubelet          Node embed-certs-153232 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29s   kubelet          Node embed-certs-153232 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           24s   node-controller  Node embed-certs-153232 event: Registered Node embed-certs-153232 in Controller
	  Normal  NodeReady                12s   kubelet          Node embed-certs-153232 status is now: NodeReady
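This node description can be regenerated against the same cluster, assuming the embed-certs-153232 profile is still running, with either the active kubeconfig or minikube's bundled kubectl:

  kubectl describe node embed-certs-153232
  minikube -p embed-certs-153232 kubectl -- describe node embed-certs-153232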
	
	
	==> dmesg <==
	[  +0.089382] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024236] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.864694] kauditd_printk_skb: 47 callbacks suppressed
	[Dec17 00:07] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +1.006904] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +1.023891] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +1.023891] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +1.023853] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +1.023879] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +2.048755] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +4.030595] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +8.447143] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[ +16.382404] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000015] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[Dec17 00:08] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	
	
	==> etcd [b98d64bd3caefb0035676aabc5fbee03686a3b5ca437f08b924204994e77f714] <==
	{"level":"warn","ts":"2025-12-17T00:42:21.564529Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:42:21.570955Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41130","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:42:21.577109Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41136","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:42:21.583148Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41152","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:42:21.589370Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41180","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:42:21.604151Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41182","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:42:21.610724Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41200","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:42:21.617532Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41206","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:42:21.623971Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41238","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:42:21.630276Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41254","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:42:21.636711Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41260","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:42:21.644312Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41268","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:42:21.651267Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:42:21.658120Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:42:21.664287Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41328","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:42:21.670682Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41346","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:42:21.688180Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41372","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:42:21.691348Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41384","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:42:21.697488Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:42:21.703541Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41410","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:42:21.751106Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41432","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-17T00:42:43.958778Z","caller":"traceutil/trace.go:172","msg":"trace[469300836] transaction","detail":"{read_only:false; response_revision:420; number_of_response:1; }","duration":"123.859729ms","start":"2025-12-17T00:42:43.834897Z","end":"2025-12-17T00:42:43.958757Z","steps":["trace[469300836] 'process raft request'  (duration: 116.414062ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T00:42:43.958772Z","caller":"traceutil/trace.go:172","msg":"trace[784444087] transaction","detail":"{read_only:false; response_revision:421; number_of_response:1; }","duration":"120.824209ms","start":"2025-12-17T00:42:43.837927Z","end":"2025-12-17T00:42:43.958751Z","steps":["trace[784444087] 'process raft request'  (duration: 120.749245ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T00:42:44.264676Z","caller":"traceutil/trace.go:172","msg":"trace[1960757986] transaction","detail":"{read_only:false; response_revision:422; number_of_response:1; }","duration":"106.100002ms","start":"2025-12-17T00:42:44.158553Z","end":"2025-12-17T00:42:44.264653Z","steps":["trace[1960757986] 'process raft request'  (duration: 105.953789ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T00:42:44.369356Z","caller":"traceutil/trace.go:172","msg":"trace[1637644726] transaction","detail":"{read_only:false; response_revision:423; number_of_response:1; }","duration":"101.109146ms","start":"2025-12-17T00:42:44.268226Z","end":"2025-12-17T00:42:44.369336Z","steps":["trace[1637644726] 'process raft request'  (duration: 55.636582ms)","trace[1637644726] 'compare'  (duration: 45.354929ms)"],"step_count":2}
	
	
	==> kernel <==
	 00:42:53 up  1:25,  0 user,  load average: 4.10, 2.91, 1.94
	Linux embed-certs-153232 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [8a8030c6215b60c0e9aa75529bb31530e02526910c6c7b8680fc839b1c8ee68a] <==
	I1217 00:42:30.958782       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1217 00:42:30.959117       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1217 00:42:30.959262       1 main.go:148] setting mtu 1500 for CNI 
	I1217 00:42:30.959288       1 main.go:178] kindnetd IP family: "ipv4"
	I1217 00:42:30.959302       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-17T00:42:31Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1217 00:42:31.158048       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1217 00:42:31.158339       1 controller.go:381] "Waiting for informer caches to sync"
	I1217 00:42:31.158386       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1217 00:42:31.158523       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1217 00:42:31.459378       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1217 00:42:31.459414       1 metrics.go:72] Registering metrics
	I1217 00:42:31.459533       1 controller.go:711] "Syncing nftables rules"
	I1217 00:42:41.158077       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1217 00:42:41.158190       1 main.go:301] handling current node
	I1217 00:42:51.161543       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1217 00:42:51.161586       1 main.go:301] handling current node
	
	
	==> kube-apiserver [b957ef241567a2bc96307705014de0f87555e5577407ae73972c64dbd824eca3] <==
	I1217 00:42:22.213870       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1217 00:42:22.228949       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1217 00:42:22.230328       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1217 00:42:22.230375       1 aggregator.go:171] initial CRD sync complete...
	I1217 00:42:22.230386       1 autoregister_controller.go:144] Starting autoregister controller
	I1217 00:42:22.230393       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1217 00:42:22.230399       1 cache.go:39] Caches are synced for autoregister controller
	I1217 00:42:23.110835       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1217 00:42:23.114809       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1217 00:42:23.114825       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1217 00:42:23.561939       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1217 00:42:23.597736       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1217 00:42:23.715580       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1217 00:42:23.721638       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1217 00:42:23.722828       1 controller.go:667] quota admission added evaluator for: endpoints
	I1217 00:42:23.727067       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1217 00:42:24.141504       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1217 00:42:24.763431       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1217 00:42:24.773904       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1217 00:42:24.781754       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1217 00:42:29.793589       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1217 00:42:30.094949       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1217 00:42:30.098310       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1217 00:42:30.244066       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1217 00:42:51.870192       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8443->192.168.85.1:54936: use of closed network connection
	
	
	==> kube-controller-manager [11c39a5e43473126e17000233d16aafbc27c8da169d514df72f858bdaada9695] <==
	I1217 00:42:29.140467       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1217 00:42:29.140498       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1217 00:42:29.140584       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1217 00:42:29.140669       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-153232"
	I1217 00:42:29.140674       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1217 00:42:29.140729       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1217 00:42:29.140831       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1217 00:42:29.140856       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1217 00:42:29.140713       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1217 00:42:29.140891       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1217 00:42:29.140499       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1217 00:42:29.140925       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1217 00:42:29.141028       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1217 00:42:29.141111       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1217 00:42:29.141155       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1217 00:42:29.141224       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1217 00:42:29.141378       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1217 00:42:29.141577       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1217 00:42:29.144066       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1217 00:42:29.146684       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1217 00:42:29.148853       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1217 00:42:29.151436       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1217 00:42:29.157757       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1217 00:42:29.170225       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1217 00:42:44.266123       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [4eec4722c72f2a1c722035286b98656d0889a51f39176854c68aa4cdeab98e32] <==
	I1217 00:42:30.763589       1 server_linux.go:53] "Using iptables proxy"
	I1217 00:42:30.881764       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1217 00:42:30.983057       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1217 00:42:30.983096       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1217 00:42:30.983183       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1217 00:42:31.003615       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1217 00:42:31.003673       1 server_linux.go:132] "Using iptables Proxier"
	I1217 00:42:31.010093       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1217 00:42:31.010465       1 server.go:527] "Version info" version="v1.34.2"
	I1217 00:42:31.010539       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 00:42:31.015440       1 config.go:200] "Starting service config controller"
	I1217 00:42:31.015483       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1217 00:42:31.015560       1 config.go:106] "Starting endpoint slice config controller"
	I1217 00:42:31.015592       1 config.go:403] "Starting serviceCIDR config controller"
	I1217 00:42:31.018862       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1217 00:42:31.018873       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1217 00:42:31.016175       1 config.go:309] "Starting node config controller"
	I1217 00:42:31.018889       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1217 00:42:31.018895       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1217 00:42:31.019360       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1217 00:42:31.019465       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1217 00:42:31.116311       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [e873d9eb77a1f1ce02c36a4d9c50f39a94ce80fd099215bbf292423cf9b4f8a1] <==
	E1217 00:42:22.158507       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1217 00:42:22.158561       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1217 00:42:22.158606       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1217 00:42:22.158649       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1217 00:42:22.158663       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1217 00:42:22.158692       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1217 00:42:22.158708       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1217 00:42:22.158709       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1217 00:42:22.158725       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1217 00:42:22.158763       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1217 00:42:22.159057       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1217 00:42:22.159078       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1217 00:42:22.984012       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1217 00:42:22.993161       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1217 00:42:23.033757       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1217 00:42:23.056983       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1217 00:42:23.108312       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1217 00:42:23.121308       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1217 00:42:23.149632       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1217 00:42:23.219396       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1217 00:42:23.235556       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1217 00:42:23.242560       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1217 00:42:23.251570       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1217 00:42:23.357174       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	I1217 00:42:23.756443       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 17 00:42:25 embed-certs-153232 kubelet[1325]: I1217 00:42:25.688623    1325 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-embed-certs-153232" podStartSLOduration=1.688599236 podStartE2EDuration="1.688599236s" podCreationTimestamp="2025-12-17 00:42:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-17 00:42:25.688438395 +0000 UTC m=+1.169519508" watchObservedRunningTime="2025-12-17 00:42:25.688599236 +0000 UTC m=+1.169680350"
	Dec 17 00:42:25 embed-certs-153232 kubelet[1325]: I1217 00:42:25.688851    1325 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-embed-certs-153232" podStartSLOduration=1.688835971 podStartE2EDuration="1.688835971s" podCreationTimestamp="2025-12-17 00:42:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-17 00:42:25.678727311 +0000 UTC m=+1.159808425" watchObservedRunningTime="2025-12-17 00:42:25.688835971 +0000 UTC m=+1.169917065"
	Dec 17 00:42:25 embed-certs-153232 kubelet[1325]: I1217 00:42:25.697403    1325 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-embed-certs-153232" podStartSLOduration=1.697387547 podStartE2EDuration="1.697387547s" podCreationTimestamp="2025-12-17 00:42:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-17 00:42:25.697353386 +0000 UTC m=+1.178434495" watchObservedRunningTime="2025-12-17 00:42:25.697387547 +0000 UTC m=+1.178468657"
	Dec 17 00:42:25 embed-certs-153232 kubelet[1325]: I1217 00:42:25.717252    1325 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-embed-certs-153232" podStartSLOduration=1.717235843 podStartE2EDuration="1.717235843s" podCreationTimestamp="2025-12-17 00:42:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-17 00:42:25.706187252 +0000 UTC m=+1.187268368" watchObservedRunningTime="2025-12-17 00:42:25.717235843 +0000 UTC m=+1.198316948"
	Dec 17 00:42:29 embed-certs-153232 kubelet[1325]: I1217 00:42:29.137968    1325 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 17 00:42:29 embed-certs-153232 kubelet[1325]: I1217 00:42:29.138747    1325 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 17 00:42:30 embed-certs-153232 kubelet[1325]: I1217 00:42:30.329642    1325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/68026912-6bcc-4aee-b806-51f967dc200f-xtables-lock\") pod \"kube-proxy-82b8k\" (UID: \"68026912-6bcc-4aee-b806-51f967dc200f\") " pod="kube-system/kube-proxy-82b8k"
	Dec 17 00:42:30 embed-certs-153232 kubelet[1325]: I1217 00:42:30.329693    1325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kxrb\" (UniqueName: \"kubernetes.io/projected/68026912-6bcc-4aee-b806-51f967dc200f-kube-api-access-6kxrb\") pod \"kube-proxy-82b8k\" (UID: \"68026912-6bcc-4aee-b806-51f967dc200f\") " pod="kube-system/kube-proxy-82b8k"
	Dec 17 00:42:30 embed-certs-153232 kubelet[1325]: I1217 00:42:30.329732    1325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7h8fb\" (UniqueName: \"kubernetes.io/projected/f06f5d73-eef9-4876-b0aa-862d58c18777-kube-api-access-7h8fb\") pod \"kindnet-zffzt\" (UID: \"f06f5d73-eef9-4876-b0aa-862d58c18777\") " pod="kube-system/kindnet-zffzt"
	Dec 17 00:42:30 embed-certs-153232 kubelet[1325]: I1217 00:42:30.329762    1325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/68026912-6bcc-4aee-b806-51f967dc200f-kube-proxy\") pod \"kube-proxy-82b8k\" (UID: \"68026912-6bcc-4aee-b806-51f967dc200f\") " pod="kube-system/kube-proxy-82b8k"
	Dec 17 00:42:30 embed-certs-153232 kubelet[1325]: I1217 00:42:30.329785    1325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/68026912-6bcc-4aee-b806-51f967dc200f-lib-modules\") pod \"kube-proxy-82b8k\" (UID: \"68026912-6bcc-4aee-b806-51f967dc200f\") " pod="kube-system/kube-proxy-82b8k"
	Dec 17 00:42:30 embed-certs-153232 kubelet[1325]: I1217 00:42:30.329805    1325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/f06f5d73-eef9-4876-b0aa-862d58c18777-cni-cfg\") pod \"kindnet-zffzt\" (UID: \"f06f5d73-eef9-4876-b0aa-862d58c18777\") " pod="kube-system/kindnet-zffzt"
	Dec 17 00:42:30 embed-certs-153232 kubelet[1325]: I1217 00:42:30.329829    1325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f06f5d73-eef9-4876-b0aa-862d58c18777-xtables-lock\") pod \"kindnet-zffzt\" (UID: \"f06f5d73-eef9-4876-b0aa-862d58c18777\") " pod="kube-system/kindnet-zffzt"
	Dec 17 00:42:30 embed-certs-153232 kubelet[1325]: I1217 00:42:30.329850    1325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f06f5d73-eef9-4876-b0aa-862d58c18777-lib-modules\") pod \"kindnet-zffzt\" (UID: \"f06f5d73-eef9-4876-b0aa-862d58c18777\") " pod="kube-system/kindnet-zffzt"
	Dec 17 00:42:31 embed-certs-153232 kubelet[1325]: I1217 00:42:31.671769    1325 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-zffzt" podStartSLOduration=1.6717473379999999 podStartE2EDuration="1.671747338s" podCreationTimestamp="2025-12-17 00:42:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-17 00:42:31.671640111 +0000 UTC m=+7.152721224" watchObservedRunningTime="2025-12-17 00:42:31.671747338 +0000 UTC m=+7.152828455"
	Dec 17 00:42:34 embed-certs-153232 kubelet[1325]: I1217 00:42:34.998084    1325 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-82b8k" podStartSLOduration=4.998049362 podStartE2EDuration="4.998049362s" podCreationTimestamp="2025-12-17 00:42:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-17 00:42:31.680862568 +0000 UTC m=+7.161943682" watchObservedRunningTime="2025-12-17 00:42:34.998049362 +0000 UTC m=+10.479130476"
	Dec 17 00:42:41 embed-certs-153232 kubelet[1325]: I1217 00:42:41.327918    1325 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Dec 17 00:42:41 embed-certs-153232 kubelet[1325]: I1217 00:42:41.418785    1325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aedf434b-e03e-479c-a8f2-199e28231d61-config-volume\") pod \"coredns-66bc5c9577-vtspd\" (UID: \"aedf434b-e03e-479c-a8f2-199e28231d61\") " pod="kube-system/coredns-66bc5c9577-vtspd"
	Dec 17 00:42:41 embed-certs-153232 kubelet[1325]: I1217 00:42:41.418852    1325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/ad4a1982-2da6-490d-bcba-f04782d2d9b8-tmp\") pod \"storage-provisioner\" (UID: \"ad4a1982-2da6-490d-bcba-f04782d2d9b8\") " pod="kube-system/storage-provisioner"
	Dec 17 00:42:41 embed-certs-153232 kubelet[1325]: I1217 00:42:41.418885    1325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7b48t\" (UniqueName: \"kubernetes.io/projected/ad4a1982-2da6-490d-bcba-f04782d2d9b8-kube-api-access-7b48t\") pod \"storage-provisioner\" (UID: \"ad4a1982-2da6-490d-bcba-f04782d2d9b8\") " pod="kube-system/storage-provisioner"
	Dec 17 00:42:41 embed-certs-153232 kubelet[1325]: I1217 00:42:41.418913    1325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lr2nj\" (UniqueName: \"kubernetes.io/projected/aedf434b-e03e-479c-a8f2-199e28231d61-kube-api-access-lr2nj\") pod \"coredns-66bc5c9577-vtspd\" (UID: \"aedf434b-e03e-479c-a8f2-199e28231d61\") " pod="kube-system/coredns-66bc5c9577-vtspd"
	Dec 17 00:42:42 embed-certs-153232 kubelet[1325]: I1217 00:42:42.745794    1325 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-vtspd" podStartSLOduration=12.745769548 podStartE2EDuration="12.745769548s" podCreationTimestamp="2025-12-17 00:42:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-17 00:42:42.7080729 +0000 UTC m=+18.189154016" watchObservedRunningTime="2025-12-17 00:42:42.745769548 +0000 UTC m=+18.226850665"
	Dec 17 00:42:42 embed-certs-153232 kubelet[1325]: I1217 00:42:42.794031    1325 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=12.794006088 podStartE2EDuration="12.794006088s" podCreationTimestamp="2025-12-17 00:42:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-17 00:42:42.789018517 +0000 UTC m=+18.270099629" watchObservedRunningTime="2025-12-17 00:42:42.794006088 +0000 UTC m=+18.275087197"
	Dec 17 00:42:44 embed-certs-153232 kubelet[1325]: I1217 00:42:44.845446    1325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9shlm\" (UniqueName: \"kubernetes.io/projected/2ded7c57-a893-4051-8499-a73941ba914b-kube-api-access-9shlm\") pod \"busybox\" (UID: \"2ded7c57-a893-4051-8499-a73941ba914b\") " pod="default/busybox"
	Dec 17 00:42:46 embed-certs-153232 kubelet[1325]: I1217 00:42:46.709284    1325 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=2.054438366 podStartE2EDuration="2.709260658s" podCreationTimestamp="2025-12-17 00:42:44 +0000 UTC" firstStartedPulling="2025-12-17 00:42:45.069733386 +0000 UTC m=+20.550814479" lastFinishedPulling="2025-12-17 00:42:45.724555663 +0000 UTC m=+21.205636771" observedRunningTime="2025-12-17 00:42:46.70893253 +0000 UTC m=+22.190013644" watchObservedRunningTime="2025-12-17 00:42:46.709260658 +0000 UTC m=+22.190341772"
	
	
	==> storage-provisioner [28c45b755476beb75db823c532ff38ff18b2f8311bc785cb99118abed1b184f4] <==
	I1217 00:42:41.766620       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1217 00:42:41.783066       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1217 00:42:41.783459       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1217 00:42:41.786642       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:42:41.792937       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1217 00:42:41.793210       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1217 00:42:41.793920       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-153232_551d0745-f2fc-4161-a10d-5bf7f7bd6ed3!
	I1217 00:42:41.793871       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"edc5b1f6-fb4f-4962-9502-23926c96ec27", APIVersion:"v1", ResourceVersion:"408", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-153232_551d0745-f2fc-4161-a10d-5bf7f7bd6ed3 became leader
	W1217 00:42:41.796327       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:42:41.802219       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1217 00:42:41.895327       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-153232_551d0745-f2fc-4161-a10d-5bf7f7bd6ed3!
	W1217 00:42:43.835061       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:42:43.960777       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:42:45.965270       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:42:45.969782       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:42:47.973394       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:42:47.977141       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:42:49.980957       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:42:49.988569       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:42:51.993744       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:42:52.028825       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-153232 -n embed-certs-153232
helpers_test.go:270: (dbg) Run:  kubectl --context embed-certs-153232 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.65s)
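For reference, the step that failed in this entry is the `addons enable metrics-server` call against embed-certs-153232, recorded with no END TIME in the Audit table further down. A minimal way to re-run that step by hand against the same profile (assuming the cluster still exists; this is an illustration, not part of the harness) would be:

	out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-153232 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain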

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.08s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-653717 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-653717 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (239.520205ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T00:43:03Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
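The exit status 11 above is minikube's paused-state check failing rather than the addon itself: per the stderr, the check ran `sudo runc list -f json` on the node and that command exited 1 because /run/runc did not exist yet. A hedged sketch of running the same probe by hand (the docker exec wrapper is an assumption for illustration only; the container name newest-cni-653717 comes from this test):

	# assumption: exec into the kic node container and repeat the same runc query the check ran
	docker exec newest-cni-653717 sudo runc list -f json

While /run/runc is still missing inside the node this reproduces the same "open /run/runc: no such file or directory" error shown above; once the runtime has created it, runc instead prints a JSON list of the containers it knows about.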
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-653717 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-653717
helpers_test.go:244: (dbg) docker inspect newest-cni-653717:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "beff396f1ecf0ad7988c26d13bbede7e2b58ac17c04e57fcb9bdf8cdfddcf41e",
	        "Created": "2025-12-17T00:42:44.576413898Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 293987,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T00:42:44.615617204Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2e44aac5cae5bb6b68b129ed5c85e80a5c1aac07706537d46ba12326f0e5c3cf",
	        "ResolvConfPath": "/var/lib/docker/containers/beff396f1ecf0ad7988c26d13bbede7e2b58ac17c04e57fcb9bdf8cdfddcf41e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/beff396f1ecf0ad7988c26d13bbede7e2b58ac17c04e57fcb9bdf8cdfddcf41e/hostname",
	        "HostsPath": "/var/lib/docker/containers/beff396f1ecf0ad7988c26d13bbede7e2b58ac17c04e57fcb9bdf8cdfddcf41e/hosts",
	        "LogPath": "/var/lib/docker/containers/beff396f1ecf0ad7988c26d13bbede7e2b58ac17c04e57fcb9bdf8cdfddcf41e/beff396f1ecf0ad7988c26d13bbede7e2b58ac17c04e57fcb9bdf8cdfddcf41e-json.log",
	        "Name": "/newest-cni-653717",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-653717:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-653717",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "beff396f1ecf0ad7988c26d13bbede7e2b58ac17c04e57fcb9bdf8cdfddcf41e",
	                "LowerDir": "/var/lib/docker/overlay2/b3d705a839526a196f0f1ae4bd0a8c2a9760f4aba6266e16997c71c4dc1dfa7d-init/diff:/var/lib/docker/overlay2/594b812fd6d8db89dab322ea9e00d43dd555e9709fb5e6953e3873cce717392c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b3d705a839526a196f0f1ae4bd0a8c2a9760f4aba6266e16997c71c4dc1dfa7d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b3d705a839526a196f0f1ae4bd0a8c2a9760f4aba6266e16997c71c4dc1dfa7d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b3d705a839526a196f0f1ae4bd0a8c2a9760f4aba6266e16997c71c4dc1dfa7d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-653717",
	                "Source": "/var/lib/docker/volumes/newest-cni-653717/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-653717",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-653717",
	                "name.minikube.sigs.k8s.io": "newest-cni-653717",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "fac6d6bcf874a6fefd7b4ef11c03c0b11fe50d8f71b73bab089d5f4bbe677fa0",
	            "SandboxKey": "/var/run/docker/netns/fac6d6bcf874",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33088"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33089"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33092"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33090"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33091"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-653717": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "978c2526e91c5a0b699851fa3eca8542bfa74ada0d698e43a470cd47adc72c7d",
	                    "EndpointID": "edbf7fd6ec14d542ea0a246cbf68b992367a1a4e624d008585d9386c55080a68",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "3e:70:67:df:f3:23",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-653717",
	                        "beff396f1ecf"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
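As an aside, individual HostPort values from the inspect output above can be pulled out with a Go-template filter; a hedged one-liner for the API server port (8443/tcp), using the container name from this test, would be:

	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' newest-cni-653717

which should print 33091 for the state captured above.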
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-653717 -n newest-cni-653717
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-653717 logs -n 25
helpers_test.go:261: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬───────
──────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼───────
──────────────┤
	│ stop    │ -p old-k8s-version-742860 --alsologtostderr -v=3                                                                                                                                                                                                     │ old-k8s-version-742860       │ jenkins │ v1.37.0 │ 17 Dec 25 00:41 UTC │ 17 Dec 25 00:41 UTC │
	│ delete  │ -p stopped-upgrade-028618                                                                                                                                                                                                                            │ stopped-upgrade-028618       │ jenkins │ v1.37.0 │ 17 Dec 25 00:41 UTC │ 17 Dec 25 00:41 UTC │
	│ start   │ -p no-preload-864613 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-864613            │ jenkins │ v1.37.0 │ 17 Dec 25 00:41 UTC │ 17 Dec 25 00:42 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-742860 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ old-k8s-version-742860       │ jenkins │ v1.37.0 │ 17 Dec 25 00:41 UTC │ 17 Dec 25 00:41 UTC │
	│ start   │ -p old-k8s-version-742860 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0        │ old-k8s-version-742860       │ jenkins │ v1.37.0 │ 17 Dec 25 00:41 UTC │ 17 Dec 25 00:42 UTC │
	│ start   │ -p cert-expiration-753607 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                            │ cert-expiration-753607       │ jenkins │ v1.37.0 │ 17 Dec 25 00:41 UTC │ 17 Dec 25 00:41 UTC │
	│ delete  │ -p cert-expiration-753607                                                                                                                                                                                                                            │ cert-expiration-753607       │ jenkins │ v1.37.0 │ 17 Dec 25 00:41 UTC │ 17 Dec 25 00:42 UTC │
	│ start   │ -p embed-certs-153232 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-153232           │ jenkins │ v1.37.0 │ 17 Dec 25 00:42 UTC │ 17 Dec 25 00:42 UTC │
	│ start   │ -p kubernetes-upgrade-803959 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                                    │ kubernetes-upgrade-803959    │ jenkins │ v1.37.0 │ 17 Dec 25 00:42 UTC │                     │
	│ start   │ -p kubernetes-upgrade-803959 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-803959    │ jenkins │ v1.37.0 │ 17 Dec 25 00:42 UTC │ 17 Dec 25 00:42 UTC │
	│ delete  │ -p kubernetes-upgrade-803959                                                                                                                                                                                                                         │ kubernetes-upgrade-803959    │ jenkins │ v1.37.0 │ 17 Dec 25 00:42 UTC │ 17 Dec 25 00:42 UTC │
	│ delete  │ -p disable-driver-mounts-827138                                                                                                                                                                                                                      │ disable-driver-mounts-827138 │ jenkins │ v1.37.0 │ 17 Dec 25 00:42 UTC │ 17 Dec 25 00:42 UTC │
	│ start   │ -p default-k8s-diff-port-414413 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-414413 │ jenkins │ v1.37.0 │ 17 Dec 25 00:42 UTC │ 17 Dec 25 00:42 UTC │
	│ addons  │ enable metrics-server -p no-preload-864613 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ no-preload-864613            │ jenkins │ v1.37.0 │ 17 Dec 25 00:42 UTC │                     │
	│ stop    │ -p no-preload-864613 --alsologtostderr -v=3                                                                                                                                                                                                          │ no-preload-864613            │ jenkins │ v1.37.0 │ 17 Dec 25 00:42 UTC │ 17 Dec 25 00:42 UTC │
	│ image   │ old-k8s-version-742860 image list --format=json                                                                                                                                                                                                      │ old-k8s-version-742860       │ jenkins │ v1.37.0 │ 17 Dec 25 00:42 UTC │ 17 Dec 25 00:42 UTC │
	│ pause   │ -p old-k8s-version-742860 --alsologtostderr -v=1                                                                                                                                                                                                     │ old-k8s-version-742860       │ jenkins │ v1.37.0 │ 17 Dec 25 00:42 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-864613 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ no-preload-864613            │ jenkins │ v1.37.0 │ 17 Dec 25 00:42 UTC │ 17 Dec 25 00:42 UTC │
	│ start   │ -p no-preload-864613 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-864613            │ jenkins │ v1.37.0 │ 17 Dec 25 00:42 UTC │                     │
	│ delete  │ -p old-k8s-version-742860                                                                                                                                                                                                                            │ old-k8s-version-742860       │ jenkins │ v1.37.0 │ 17 Dec 25 00:42 UTC │ 17 Dec 25 00:42 UTC │
	│ delete  │ -p old-k8s-version-742860                                                                                                                                                                                                                            │ old-k8s-version-742860       │ jenkins │ v1.37.0 │ 17 Dec 25 00:42 UTC │ 17 Dec 25 00:42 UTC │
	│ start   │ -p newest-cni-653717 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-653717            │ jenkins │ v1.37.0 │ 17 Dec 25 00:42 UTC │ 17 Dec 25 00:43 UTC │
	│ addons  │ enable metrics-server -p embed-certs-153232 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ embed-certs-153232           │ jenkins │ v1.37.0 │ 17 Dec 25 00:42 UTC │                     │
	│ stop    │ -p embed-certs-153232 --alsologtostderr -v=3                                                                                                                                                                                                         │ embed-certs-153232           │ jenkins │ v1.37.0 │ 17 Dec 25 00:42 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-653717 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ newest-cni-653717            │ jenkins │ v1.37.0 │ 17 Dec 25 00:43 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴───────
──────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 00:42:39
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 00:42:39.551621  292081 out.go:360] Setting OutFile to fd 1 ...
	I1217 00:42:39.551901  292081 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:42:39.551911  292081 out.go:374] Setting ErrFile to fd 2...
	I1217 00:42:39.551915  292081 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:42:39.552166  292081 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-12816/.minikube/bin
	I1217 00:42:39.552662  292081 out.go:368] Setting JSON to false
	I1217 00:42:39.553726  292081 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":5109,"bootTime":1765927050,"procs":301,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 00:42:39.553780  292081 start.go:143] virtualization: kvm guest
	I1217 00:42:39.555553  292081 out.go:179] * [newest-cni-653717] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 00:42:39.556746  292081 notify.go:221] Checking for updates...
	I1217 00:42:39.556769  292081 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 00:42:39.557949  292081 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 00:42:39.559133  292081 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22168-12816/kubeconfig
	I1217 00:42:39.560242  292081 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-12816/.minikube
	I1217 00:42:39.561274  292081 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 00:42:39.563103  292081 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 00:42:39.564577  292081 config.go:182] Loaded profile config "default-k8s-diff-port-414413": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:42:39.564675  292081 config.go:182] Loaded profile config "embed-certs-153232": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:42:39.564782  292081 config.go:182] Loaded profile config "no-preload-864613": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1217 00:42:39.564899  292081 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 00:42:39.590554  292081 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1217 00:42:39.590699  292081 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:42:39.656559  292081 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-17 00:42:39.646099494 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
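	A minimal sketch (assuming the same docker CLI invoked above): the fields this step actually relies on, such as the cgroup driver, storage driver and server version visible in the blob above, can be read directly with a Go template instead of the full JSON dump.
	# same call as `docker system info --format "{{json .}}"`, narrowed to three fields
	docker system info --format 'cgroup driver: {{.CgroupDriver}}, storage driver: {{.Driver}}, server: {{.ServerVersion}}'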
	I1217 00:42:39.656663  292081 docker.go:319] overlay module found
	I1217 00:42:39.659088  292081 out.go:179] * Using the docker driver based on user configuration
	I1217 00:42:39.660142  292081 start.go:309] selected driver: docker
	I1217 00:42:39.660155  292081 start.go:927] validating driver "docker" against <nil>
	I1217 00:42:39.660166  292081 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 00:42:39.660774  292081 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:42:39.722518  292081 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-17 00:42:39.711146936 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 00:42:39.722723  292081 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1217 00:42:39.722757  292081 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1217 00:42:39.723072  292081 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1217 00:42:39.726391  292081 out.go:179] * Using Docker driver with root privileges
	I1217 00:42:39.727427  292081 cni.go:84] Creating CNI manager for ""
	I1217 00:42:39.727511  292081 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 00:42:39.727530  292081 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1217 00:42:39.727629  292081 start.go:353] cluster config:
	{Name:newest-cni-653717 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-653717 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePat
h: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:42:39.729007  292081 out.go:179] * Starting "newest-cni-653717" primary control-plane node in "newest-cni-653717" cluster
	I1217 00:42:39.729981  292081 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 00:42:39.731716  292081 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 00:42:39.732745  292081 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1217 00:42:39.732775  292081 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22168-12816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I1217 00:42:39.732795  292081 cache.go:65] Caching tarball of preloaded images
	I1217 00:42:39.732856  292081 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 00:42:39.732901  292081 preload.go:238] Found /home/jenkins/minikube-integration/22168-12816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1217 00:42:39.732916  292081 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1217 00:42:39.733047  292081 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/newest-cni-653717/config.json ...
	I1217 00:42:39.733072  292081 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/newest-cni-653717/config.json: {Name:mkc027815a15326496ab2408383e384558a71cb4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:42:39.754922  292081 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 00:42:39.754940  292081 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 00:42:39.754960  292081 cache.go:243] Successfully downloaded all kic artifacts
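	A minimal sketch (using the kicbase repository and digest printed above; the digest-only reference form is an assumption): the "exists in daemon, skipping load" decision comes down to an image inspect that exits non-zero when the pinned base image is absent.
	docker image inspect \
	  gcr.io/k8s-minikube/kicbase-builds@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 \
	  --format '{{.Id}}' >/dev/null && echo "kicbase already present"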
	I1217 00:42:39.755026  292081 start.go:360] acquireMachinesLock for newest-cni-653717: {Name:mk721025c3a21068c756325b281b92cea9d9d432 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 00:42:39.755136  292081 start.go:364] duration metric: took 91.503µs to acquireMachinesLock for "newest-cni-653717"
	I1217 00:42:39.755162  292081 start.go:93] Provisioning new machine with config: &{Name:newest-cni-653717 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-653717 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOpti
mizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 00:42:39.755339  292081 start.go:125] createHost starting for "" (driver="docker")
	I1217 00:42:34.990123  290128 out.go:252] * Restarting existing docker container for "no-preload-864613" ...
	I1217 00:42:34.990194  290128 cli_runner.go:164] Run: docker start no-preload-864613
	I1217 00:42:35.271109  290128 cli_runner.go:164] Run: docker container inspect no-preload-864613 --format={{.State.Status}}
	I1217 00:42:35.293226  290128 kic.go:430] container "no-preload-864613" state is running.
	I1217 00:42:35.293636  290128 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-864613
	I1217 00:42:35.318368  290128 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/no-preload-864613/config.json ...
	I1217 00:42:35.318647  290128 machine.go:94] provisionDockerMachine start ...
	I1217 00:42:35.318739  290128 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-864613
	I1217 00:42:35.345305  290128 main.go:143] libmachine: Using SSH client type: native
	I1217 00:42:35.345550  290128 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1217 00:42:35.345563  290128 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 00:42:35.346319  290128 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48788->127.0.0.1:33083: read: connection reset by peer
	I1217 00:42:38.479735  290128 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-864613
	
	I1217 00:42:38.479761  290128 ubuntu.go:182] provisioning hostname "no-preload-864613"
	I1217 00:42:38.479822  290128 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-864613
	I1217 00:42:38.498770  290128 main.go:143] libmachine: Using SSH client type: native
	I1217 00:42:38.499115  290128 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1217 00:42:38.499136  290128 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-864613 && echo "no-preload-864613" | sudo tee /etc/hostname
	I1217 00:42:38.637351  290128 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-864613
	
	I1217 00:42:38.637440  290128 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-864613
	I1217 00:42:38.658230  290128 main.go:143] libmachine: Using SSH client type: native
	I1217 00:42:38.658487  290128 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1217 00:42:38.658515  290128 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-864613' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-864613/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-864613' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 00:42:38.788384  290128 main.go:143] libmachine: SSH cmd err, output: <nil>: 
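	A minimal sketch (reusing the SSH port, user and key path that appear in this log for no-preload-864613): the hostname provisioning performed above can be checked by hand against the container's published SSH port.
	ssh -i /home/jenkins/minikube-integration/22168-12816/.minikube/machines/no-preload-864613/id_rsa \
	    -p 33083 docker@127.0.0.1 \
	    'hostname && grep no-preload-864613 /etc/hosts'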
	I1217 00:42:38.788407  290128 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22168-12816/.minikube CaCertPath:/home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22168-12816/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22168-12816/.minikube}
	I1217 00:42:38.788434  290128 ubuntu.go:190] setting up certificates
	I1217 00:42:38.788447  290128 provision.go:84] configureAuth start
	I1217 00:42:38.788515  290128 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-864613
	I1217 00:42:38.807940  290128 provision.go:143] copyHostCerts
	I1217 00:42:38.808027  290128 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-12816/.minikube/ca.pem, removing ...
	I1217 00:42:38.808047  290128 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-12816/.minikube/ca.pem
	I1217 00:42:38.808122  290128 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22168-12816/.minikube/ca.pem (1078 bytes)
	I1217 00:42:38.808261  290128 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-12816/.minikube/cert.pem, removing ...
	I1217 00:42:38.808286  290128 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-12816/.minikube/cert.pem
	I1217 00:42:38.808332  290128 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22168-12816/.minikube/cert.pem (1123 bytes)
	I1217 00:42:38.808431  290128 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-12816/.minikube/key.pem, removing ...
	I1217 00:42:38.808442  290128 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-12816/.minikube/key.pem
	I1217 00:42:38.808491  290128 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22168-12816/.minikube/key.pem (1679 bytes)
	I1217 00:42:38.808580  290128 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22168-12816/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca-key.pem org=jenkins.no-preload-864613 san=[127.0.0.1 192.168.103.2 localhost minikube no-preload-864613]
	I1217 00:42:38.892180  290128 provision.go:177] copyRemoteCerts
	I1217 00:42:38.892238  290128 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 00:42:38.892281  290128 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-864613
	I1217 00:42:38.911079  290128 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/no-preload-864613/id_rsa Username:docker}
	I1217 00:42:39.005310  290128 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1217 00:42:39.023102  290128 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1217 00:42:39.040760  290128 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1217 00:42:39.058614  290128 provision.go:87] duration metric: took 270.146931ms to configureAuth
	I1217 00:42:39.058640  290128 ubuntu.go:206] setting minikube options for container-runtime
	I1217 00:42:39.058823  290128 config.go:182] Loaded profile config "no-preload-864613": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1217 00:42:39.058943  290128 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-864613
	I1217 00:42:39.077523  290128 main.go:143] libmachine: Using SSH client type: native
	I1217 00:42:39.077804  290128 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1217 00:42:39.077831  290128 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 00:42:39.439563  290128 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 00:42:39.439588  290128 machine.go:97] duration metric: took 4.120922822s to provisionDockerMachine
	I1217 00:42:39.439652  290128 start.go:293] postStartSetup for "no-preload-864613" (driver="docker")
	I1217 00:42:39.439674  290128 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 00:42:39.439737  290128 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 00:42:39.439779  290128 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-864613
	I1217 00:42:39.458833  290128 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/no-preload-864613/id_rsa Username:docker}
	I1217 00:42:39.556864  290128 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 00:42:39.560960  290128 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 00:42:39.560985  290128 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 00:42:39.561021  290128 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-12816/.minikube/addons for local assets ...
	I1217 00:42:39.561074  290128 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-12816/.minikube/files for local assets ...
	I1217 00:42:39.561192  290128 filesync.go:149] local asset: /home/jenkins/minikube-integration/22168-12816/.minikube/files/etc/ssl/certs/163542.pem -> 163542.pem in /etc/ssl/certs
	I1217 00:42:39.561332  290128 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 00:42:39.569440  290128 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/files/etc/ssl/certs/163542.pem --> /etc/ssl/certs/163542.pem (1708 bytes)
	I1217 00:42:39.589195  290128 start.go:296] duration metric: took 149.524862ms for postStartSetup
	I1217 00:42:39.589264  290128 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 00:42:39.589306  290128 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-864613
	I1217 00:42:39.611378  290128 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/no-preload-864613/id_rsa Username:docker}
	I1217 00:42:39.706544  290128 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 00:42:39.711286  290128 fix.go:56] duration metric: took 4.742462188s for fixHost
	I1217 00:42:39.711308  290128 start.go:83] releasing machines lock for "no-preload-864613", held for 4.742503801s
	I1217 00:42:39.711366  290128 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-864613
	I1217 00:42:39.731529  290128 ssh_runner.go:195] Run: cat /version.json
	I1217 00:42:39.731581  290128 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-864613
	I1217 00:42:39.731644  290128 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 00:42:39.731702  290128 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-864613
	I1217 00:42:39.751519  290128 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/no-preload-864613/id_rsa Username:docker}
	I1217 00:42:39.752129  290128 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/no-preload-864613/id_rsa Username:docker}
	I1217 00:42:38.606421  284412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:42:39.107158  284412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:42:39.606578  284412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:42:40.107068  284412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:42:40.607475  284412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:42:41.106671  284412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:42:41.191185  284412 kubeadm.go:1114] duration metric: took 4.657497583s to wait for elevateKubeSystemPrivileges
	I1217 00:42:41.191228  284412 kubeadm.go:403] duration metric: took 15.676954898s to StartCluster
	I1217 00:42:41.191250  284412 settings.go:142] acquiring lock: {Name:mk7d7632cd00ceda791845d793d841181ea8188a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:42:41.191326  284412 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22168-12816/kubeconfig
	I1217 00:42:41.193393  284412 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/kubeconfig: {Name:mkd977475b39383babd1c1bcd25b2b3c1ea2d727 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:42:41.193647  284412 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 00:42:41.193813  284412 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1217 00:42:41.193845  284412 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 00:42:41.193954  284412 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-414413"
	I1217 00:42:41.193968  284412 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-414413"
	I1217 00:42:41.193986  284412 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-414413"
	I1217 00:42:41.194024  284412 config.go:182] Loaded profile config "default-k8s-diff-port-414413": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:42:41.194065  284412 host.go:66] Checking if "default-k8s-diff-port-414413" exists ...
	I1217 00:42:41.193999  284412 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-414413"
	I1217 00:42:41.194464  284412 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-414413 --format={{.State.Status}}
	I1217 00:42:41.194643  284412 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-414413 --format={{.State.Status}}
	I1217 00:42:41.199234  284412 out.go:179] * Verifying Kubernetes components...
	I1217 00:42:41.200842  284412 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:42:41.223659  284412 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 00:42:39.842716  290128 ssh_runner.go:195] Run: systemctl --version
	I1217 00:42:39.901651  290128 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 00:42:39.939255  290128 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 00:42:39.944540  290128 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 00:42:39.944621  290128 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 00:42:39.953110  290128 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1217 00:42:39.953139  290128 start.go:496] detecting cgroup driver to use...
	I1217 00:42:39.953172  290128 detect.go:190] detected "systemd" cgroup driver on host os
	I1217 00:42:39.953213  290128 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 00:42:39.969396  290128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 00:42:39.983260  290128 docker.go:218] disabling cri-docker service (if available) ...
	I1217 00:42:39.983311  290128 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 00:42:40.004183  290128 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 00:42:40.024650  290128 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 00:42:40.129134  290128 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 00:42:40.230889  290128 docker.go:234] disabling docker service ...
	I1217 00:42:40.230963  290128 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 00:42:40.249697  290128 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 00:42:40.263284  290128 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 00:42:40.366858  290128 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 00:42:40.456091  290128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 00:42:40.475522  290128 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 00:42:40.491549  290128 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 00:42:40.491607  290128 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:42:40.501573  290128 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1217 00:42:40.501637  290128 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:42:40.512350  290128 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:42:40.521014  290128 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:42:40.529858  290128 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 00:42:40.539315  290128 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:42:40.548317  290128 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:42:40.556545  290128 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:42:40.565397  290128 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 00:42:40.572624  290128 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 00:42:40.580318  290128 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:42:40.683030  290128 ssh_runner.go:195] Run: sudo systemctl restart crio
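	A minimal sketch (run inside the node, assuming the paths used by the sed commands above): after the restart it is straightforward to confirm that the rewritten drop-in and the crictl endpoint written to /etc/crictl.yaml are both in effect.
	grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	sudo systemctl is-active crio
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version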
	I1217 00:42:41.204474  290128 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 00:42:41.204534  290128 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 00:42:41.212044  290128 start.go:564] Will wait 60s for crictl version
	I1217 00:42:41.212169  290128 ssh_runner.go:195] Run: which crictl
	I1217 00:42:41.218032  290128 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 00:42:41.259772  290128 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1217 00:42:41.259870  290128 ssh_runner.go:195] Run: crio --version
	I1217 00:42:41.301952  290128 ssh_runner.go:195] Run: crio --version
	I1217 00:42:41.350647  290128 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1217 00:42:41.224598  284412 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-414413"
	I1217 00:42:41.224648  284412 host.go:66] Checking if "default-k8s-diff-port-414413" exists ...
	I1217 00:42:41.224951  284412 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:42:41.224967  284412 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 00:42:41.225071  284412 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-414413
	I1217 00:42:41.225154  284412 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-414413 --format={{.State.Status}}
	I1217 00:42:41.259312  284412 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 00:42:41.259335  284412 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 00:42:41.259392  284412 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-414413
	I1217 00:42:41.263451  284412 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/default-k8s-diff-port-414413/id_rsa Username:docker}
	I1217 00:42:41.284213  284412 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/default-k8s-diff-port-414413/id_rsa Username:docker}
	I1217 00:42:41.318678  284412 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1217 00:42:41.383381  284412 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 00:42:41.407448  284412 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:42:41.422819  284412 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:42:41.574977  284412 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1217 00:42:41.576809  284412 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-414413" to be "Ready" ...
	I1217 00:42:41.847773  284412 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
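	A minimal sketch (assuming shell access to the default-k8s-diff-port-414413 node, using the kubectl binary and kubeconfig paths from the commands above): the hosts block injected into the CoreDNS Corefile a few lines earlier can be verified directly.
	sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}' | grep -A3 'hosts {'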
	I1217 00:42:41.351979  290128 cli_runner.go:164] Run: docker network inspect no-preload-864613 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 00:42:41.382246  290128 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1217 00:42:41.388749  290128 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 00:42:41.407129  290128 kubeadm.go:884] updating cluster {Name:no-preload-864613 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-864613 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p
MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 00:42:41.407300  290128 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1217 00:42:41.407365  290128 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 00:42:41.460347  290128 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 00:42:41.461086  290128 cache_images.go:86] Images are preloaded, skipping loading
	I1217 00:42:41.461107  290128 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.35.0-beta.0 crio true true} ...
	I1217 00:42:41.461250  290128 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-864613 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-864613 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 00:42:41.461345  290128 ssh_runner.go:195] Run: crio config
	I1217 00:42:41.540745  290128 cni.go:84] Creating CNI manager for ""
	I1217 00:42:41.540776  290128 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 00:42:41.540795  290128 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 00:42:41.540825  290128 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-864613 NodeName:no-preload-864613 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPod
Path:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 00:42:41.541050  290128 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-864613"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 00:42:41.541130  290128 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1217 00:42:41.552225  290128 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 00:42:41.552302  290128 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 00:42:41.563587  290128 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I1217 00:42:41.582296  290128 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1217 00:42:41.601245  290128 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2223 bytes)
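	A minimal sketch (assuming shell access to the node; availability of "kubeadm config validate" in this beta build is an assumption): the staged kubeadm config and the kubelet ExecStart override copied above can be checked before kubeadm is actually invoked.
	# validate the generated kubeadm config without touching the cluster
	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
	# confirm systemd sees the ExecStart override from 10-kubeadm.conf
	systemctl cat kubelet | grep -A2 'ExecStart='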
	I1217 00:42:41.618842  290128 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1217 00:42:41.627097  290128 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 00:42:41.640575  290128 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:42:41.779494  290128 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 00:42:41.808680  290128 certs.go:69] Setting up /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/no-preload-864613 for IP: 192.168.103.2
	I1217 00:42:41.808703  290128 certs.go:195] generating shared ca certs ...
	I1217 00:42:41.808722  290128 certs.go:227] acquiring lock for ca certs: {Name:mk3fafd0dd66863a6056cb02497503a5e6afecf4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:42:41.808901  290128 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22168-12816/.minikube/ca.key
	I1217 00:42:41.808964  290128 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22168-12816/.minikube/proxy-client-ca.key
	I1217 00:42:41.808977  290128 certs.go:257] generating profile certs ...
	I1217 00:42:41.809120  290128 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/no-preload-864613/client.key
	I1217 00:42:41.809192  290128 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/no-preload-864613/apiserver.key.74439f26
	I1217 00:42:41.809257  290128 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/no-preload-864613/proxy-client.key
	I1217 00:42:41.809398  290128 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/16354.pem (1338 bytes)
	W1217 00:42:41.809440  290128 certs.go:480] ignoring /home/jenkins/minikube-integration/22168-12816/.minikube/certs/16354_empty.pem, impossibly tiny 0 bytes
	I1217 00:42:41.809456  290128 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca-key.pem (1679 bytes)
	I1217 00:42:41.809498  290128 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem (1078 bytes)
	I1217 00:42:41.809536  290128 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/cert.pem (1123 bytes)
	I1217 00:42:41.809574  290128 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/key.pem (1679 bytes)
	I1217 00:42:41.809636  290128 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/files/etc/ssl/certs/163542.pem (1708 bytes)
	I1217 00:42:41.810241  290128 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 00:42:41.835907  290128 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1217 00:42:41.859930  290128 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 00:42:41.882138  290128 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1671 bytes)
	I1217 00:42:41.912524  290128 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/no-preload-864613/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1217 00:42:41.936204  290128 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/no-preload-864613/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1217 00:42:41.956213  290128 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/no-preload-864613/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 00:42:41.975723  290128 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/no-preload-864613/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 00:42:41.996532  290128 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 00:42:42.014588  290128 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/certs/16354.pem --> /usr/share/ca-certificates/16354.pem (1338 bytes)
	I1217 00:42:42.033145  290128 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/files/etc/ssl/certs/163542.pem --> /usr/share/ca-certificates/163542.pem (1708 bytes)
	I1217 00:42:42.050882  290128 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 00:42:42.063166  290128 ssh_runner.go:195] Run: openssl version
	I1217 00:42:42.069209  290128 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:42:42.078973  290128 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 00:42:42.087312  290128 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:42:42.091173  290128 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 00:05 /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:42:42.091229  290128 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:42:42.127714  290128 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 00:42:42.135865  290128 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/16354.pem
	I1217 00:42:42.144324  290128 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/16354.pem /etc/ssl/certs/16354.pem
	I1217 00:42:42.151722  290128 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16354.pem
	I1217 00:42:42.155308  290128 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 00:13 /usr/share/ca-certificates/16354.pem
	I1217 00:42:42.155360  290128 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16354.pem
	I1217 00:42:42.194654  290128 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 00:42:42.204167  290128 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/163542.pem
	I1217 00:42:42.212026  290128 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/163542.pem /etc/ssl/certs/163542.pem
	I1217 00:42:42.219563  290128 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/163542.pem
	I1217 00:42:42.223628  290128 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 00:13 /usr/share/ca-certificates/163542.pem
	I1217 00:42:42.223682  290128 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/163542.pem
	I1217 00:42:42.276803  290128 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
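	A minimal sketch (using the CA file paths shown above): the "/etc/ssl/certs/<hash>.0" names being tested are the OpenSSL subject hashes of the copied certificates, which is why each "openssl x509 -hash" call above is paired with a symlink check.
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints b5213941 in this run
	sudo test -L "/etc/ssl/certs/${h}.0" && echo "CA hash symlink present"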
	I1217 00:42:42.285639  290128 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 00:42:42.290281  290128 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 00:42:42.328285  290128 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 00:42:42.379314  290128 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 00:42:42.430181  290128 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 00:42:42.491936  290128 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 00:42:42.551969  290128 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1217 00:42:42.595853  290128 kubeadm.go:401] StartCluster: {Name:no-preload-864613 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-864613 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:42:42.595977  290128 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 00:42:42.596064  290128 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 00:42:42.632158  290128 cri.go:89] found id: "4b34ed74185a723d1987fd893c6b89aa61e85dd77a4391ea83bf44f5d07a0931"
	I1217 00:42:42.632183  290128 cri.go:89] found id: "a590d671bfa52ffb77f09298e606dd5a6cef506d25bf7c749bd516cf65fabaab"
	I1217 00:42:42.632191  290128 cri.go:89] found id: "a12cf220a059b218df62a14f9045f72149c1009f3507c8c36e206fdf43dc9d57"
	I1217 00:42:42.632202  290128 cri.go:89] found id: "d592a6ba05b7b5e2d53ffd9b29510a47348394c0b8faf29e99d49dce869dbeff"
	I1217 00:42:42.632208  290128 cri.go:89] found id: ""
	I1217 00:42:42.632258  290128 ssh_runner.go:195] Run: sudo runc list -f json
	W1217 00:42:42.648079  290128 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T00:42:42Z" level=error msg="open /run/runc: no such file or directory"
	I1217 00:42:42.648152  290128 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 00:42:42.659743  290128 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1217 00:42:42.659831  290128 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1217 00:42:42.659957  290128 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1217 00:42:42.670583  290128 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1217 00:42:42.671843  290128 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-864613" does not appear in /home/jenkins/minikube-integration/22168-12816/kubeconfig
	I1217 00:42:42.672610  290128 kubeconfig.go:62] /home/jenkins/minikube-integration/22168-12816/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-864613" cluster setting kubeconfig missing "no-preload-864613" context setting]
	I1217 00:42:42.673849  290128 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/kubeconfig: {Name:mkd977475b39383babd1c1bcd25b2b3c1ea2d727 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:42:42.676258  290128 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1217 00:42:42.685491  290128 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.103.2
	I1217 00:42:42.685528  290128 kubeadm.go:602] duration metric: took 25.615797ms to restartPrimaryControlPlane
	I1217 00:42:42.685540  290128 kubeadm.go:403] duration metric: took 89.695231ms to StartCluster
	I1217 00:42:42.685558  290128 settings.go:142] acquiring lock: {Name:mk7d7632cd00ceda791845d793d841181ea8188a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:42:42.685612  290128 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22168-12816/kubeconfig
	I1217 00:42:42.687715  290128 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/kubeconfig: {Name:mkd977475b39383babd1c1bcd25b2b3c1ea2d727 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:42:42.687977  290128 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 00:42:42.688235  290128 config.go:182] Loaded profile config "no-preload-864613": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1217 00:42:42.688305  290128 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 00:42:42.688442  290128 addons.go:70] Setting storage-provisioner=true in profile "no-preload-864613"
	I1217 00:42:42.688465  290128 addons.go:239] Setting addon storage-provisioner=true in "no-preload-864613"
	I1217 00:42:42.688466  290128 addons.go:70] Setting dashboard=true in profile "no-preload-864613"
	W1217 00:42:42.688473  290128 addons.go:248] addon storage-provisioner should already be in state true
	I1217 00:42:42.688487  290128 addons.go:70] Setting default-storageclass=true in profile "no-preload-864613"
	I1217 00:42:42.688491  290128 addons.go:239] Setting addon dashboard=true in "no-preload-864613"
	W1217 00:42:42.688504  290128 addons.go:248] addon dashboard should already be in state true
	I1217 00:42:42.688508  290128 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-864613"
	I1217 00:42:42.688534  290128 host.go:66] Checking if "no-preload-864613" exists ...
	I1217 00:42:42.688565  290128 host.go:66] Checking if "no-preload-864613" exists ...
	I1217 00:42:42.688902  290128 cli_runner.go:164] Run: docker container inspect no-preload-864613 --format={{.State.Status}}
	I1217 00:42:42.689014  290128 cli_runner.go:164] Run: docker container inspect no-preload-864613 --format={{.State.Status}}
	I1217 00:42:42.689031  290128 cli_runner.go:164] Run: docker container inspect no-preload-864613 --format={{.State.Status}}
	I1217 00:42:42.696229  290128 out.go:179] * Verifying Kubernetes components...
	I1217 00:42:42.698403  290128 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:42:42.723527  290128 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1217 00:42:42.724785  290128 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1217 00:42:42.725948  290128 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1217 00:42:42.726058  290128 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1217 00:42:42.726130  290128 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-864613
	I1217 00:42:42.727005  290128 addons.go:239] Setting addon default-storageclass=true in "no-preload-864613"
	W1217 00:42:42.727021  290128 addons.go:248] addon default-storageclass should already be in state true
	I1217 00:42:42.727055  290128 host.go:66] Checking if "no-preload-864613" exists ...
	I1217 00:42:42.727489  290128 cli_runner.go:164] Run: docker container inspect no-preload-864613 --format={{.State.Status}}
	I1217 00:42:42.730761  290128 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W1217 00:42:39.742533  280822 node_ready.go:57] node "embed-certs-153232" has "Ready":"False" status (will retry)
	I1217 00:42:41.749292  280822 node_ready.go:49] node "embed-certs-153232" is "Ready"
	I1217 00:42:41.749331  280822 node_ready.go:38] duration metric: took 11.010585734s for node "embed-certs-153232" to be "Ready" ...
	I1217 00:42:41.749349  280822 api_server.go:52] waiting for apiserver process to appear ...
	I1217 00:42:41.749405  280822 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:41.774188  280822 api_server.go:72] duration metric: took 11.489358576s to wait for apiserver process to appear ...
	I1217 00:42:41.774225  280822 api_server.go:88] waiting for apiserver healthz status ...
	I1217 00:42:41.774250  280822 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1217 00:42:41.783349  280822 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1217 00:42:41.784553  280822 api_server.go:141] control plane version: v1.34.2
	I1217 00:42:41.784584  280822 api_server.go:131] duration metric: took 10.351149ms to wait for apiserver health ...
	I1217 00:42:41.784596  280822 system_pods.go:43] waiting for kube-system pods to appear ...
	I1217 00:42:41.788662  280822 system_pods.go:59] 8 kube-system pods found
	I1217 00:42:41.788701  280822 system_pods.go:61] "coredns-66bc5c9577-vtspd" [aedf434b-e03e-479c-a8f2-199e28231d61] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 00:42:41.788711  280822 system_pods.go:61] "etcd-embed-certs-153232" [68a7a631-c79e-48d1-bd8d-1aafc2b61fcc] Running
	I1217 00:42:41.788718  280822 system_pods.go:61] "kindnet-zffzt" [f06f5d73-eef9-4876-b0aa-862d58c18777] Running
	I1217 00:42:41.788724  280822 system_pods.go:61] "kube-apiserver-embed-certs-153232" [a0a484be-31c5-4471-b35c-7d059d9e1b00] Running
	I1217 00:42:41.788736  280822 system_pods.go:61] "kube-controller-manager-embed-certs-153232" [6fd01afb-bd8e-450b-9082-310ff94c5958] Running
	I1217 00:42:41.788741  280822 system_pods.go:61] "kube-proxy-82b8k" [68026912-6bcc-4aee-b806-51f967dc200f] Running
	I1217 00:42:41.788746  280822 system_pods.go:61] "kube-scheduler-embed-certs-153232" [af854f70-8bef-44c5-ad64-197a3282d5c3] Running
	I1217 00:42:41.788794  280822 system_pods.go:61] "storage-provisioner" [ad4a1982-2da6-490d-bcba-f04782d2d9b8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 00:42:41.788808  280822 system_pods.go:74] duration metric: took 4.204985ms to wait for pod list to return data ...
	I1217 00:42:41.788822  280822 default_sa.go:34] waiting for default service account to be created ...
	I1217 00:42:41.793561  280822 default_sa.go:45] found service account: "default"
	I1217 00:42:41.793587  280822 default_sa.go:55] duration metric: took 4.758694ms for default service account to be created ...
	I1217 00:42:41.793600  280822 system_pods.go:116] waiting for k8s-apps to be running ...
	I1217 00:42:41.889984  280822 system_pods.go:86] 8 kube-system pods found
	I1217 00:42:41.890037  280822 system_pods.go:89] "coredns-66bc5c9577-vtspd" [aedf434b-e03e-479c-a8f2-199e28231d61] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 00:42:41.890046  280822 system_pods.go:89] "etcd-embed-certs-153232" [68a7a631-c79e-48d1-bd8d-1aafc2b61fcc] Running
	I1217 00:42:41.890056  280822 system_pods.go:89] "kindnet-zffzt" [f06f5d73-eef9-4876-b0aa-862d58c18777] Running
	I1217 00:42:41.890063  280822 system_pods.go:89] "kube-apiserver-embed-certs-153232" [a0a484be-31c5-4471-b35c-7d059d9e1b00] Running
	I1217 00:42:41.890073  280822 system_pods.go:89] "kube-controller-manager-embed-certs-153232" [6fd01afb-bd8e-450b-9082-310ff94c5958] Running
	I1217 00:42:41.890078  280822 system_pods.go:89] "kube-proxy-82b8k" [68026912-6bcc-4aee-b806-51f967dc200f] Running
	I1217 00:42:41.890085  280822 system_pods.go:89] "kube-scheduler-embed-certs-153232" [af854f70-8bef-44c5-ad64-197a3282d5c3] Running
	I1217 00:42:41.890095  280822 system_pods.go:89] "storage-provisioner" [ad4a1982-2da6-490d-bcba-f04782d2d9b8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 00:42:41.890123  280822 retry.go:31] will retry after 248.746676ms: missing components: kube-dns
	I1217 00:42:42.142494  280822 system_pods.go:86] 8 kube-system pods found
	I1217 00:42:42.142524  280822 system_pods.go:89] "coredns-66bc5c9577-vtspd" [aedf434b-e03e-479c-a8f2-199e28231d61] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 00:42:42.142538  280822 system_pods.go:89] "etcd-embed-certs-153232" [68a7a631-c79e-48d1-bd8d-1aafc2b61fcc] Running
	I1217 00:42:42.142546  280822 system_pods.go:89] "kindnet-zffzt" [f06f5d73-eef9-4876-b0aa-862d58c18777] Running
	I1217 00:42:42.142550  280822 system_pods.go:89] "kube-apiserver-embed-certs-153232" [a0a484be-31c5-4471-b35c-7d059d9e1b00] Running
	I1217 00:42:42.142554  280822 system_pods.go:89] "kube-controller-manager-embed-certs-153232" [6fd01afb-bd8e-450b-9082-310ff94c5958] Running
	I1217 00:42:42.142557  280822 system_pods.go:89] "kube-proxy-82b8k" [68026912-6bcc-4aee-b806-51f967dc200f] Running
	I1217 00:42:42.142560  280822 system_pods.go:89] "kube-scheduler-embed-certs-153232" [af854f70-8bef-44c5-ad64-197a3282d5c3] Running
	I1217 00:42:42.142565  280822 system_pods.go:89] "storage-provisioner" [ad4a1982-2da6-490d-bcba-f04782d2d9b8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 00:42:42.142577  280822 retry.go:31] will retry after 366.812444ms: missing components: kube-dns
	I1217 00:42:42.514253  280822 system_pods.go:86] 8 kube-system pods found
	I1217 00:42:42.514281  280822 system_pods.go:89] "coredns-66bc5c9577-vtspd" [aedf434b-e03e-479c-a8f2-199e28231d61] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 00:42:42.514287  280822 system_pods.go:89] "etcd-embed-certs-153232" [68a7a631-c79e-48d1-bd8d-1aafc2b61fcc] Running
	I1217 00:42:42.514293  280822 system_pods.go:89] "kindnet-zffzt" [f06f5d73-eef9-4876-b0aa-862d58c18777] Running
	I1217 00:42:42.514296  280822 system_pods.go:89] "kube-apiserver-embed-certs-153232" [a0a484be-31c5-4471-b35c-7d059d9e1b00] Running
	I1217 00:42:42.514300  280822 system_pods.go:89] "kube-controller-manager-embed-certs-153232" [6fd01afb-bd8e-450b-9082-310ff94c5958] Running
	I1217 00:42:42.514304  280822 system_pods.go:89] "kube-proxy-82b8k" [68026912-6bcc-4aee-b806-51f967dc200f] Running
	I1217 00:42:42.514307  280822 system_pods.go:89] "kube-scheduler-embed-certs-153232" [af854f70-8bef-44c5-ad64-197a3282d5c3] Running
	I1217 00:42:42.514312  280822 system_pods.go:89] "storage-provisioner" [ad4a1982-2da6-490d-bcba-f04782d2d9b8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 00:42:42.514399  280822 retry.go:31] will retry after 333.656577ms: missing components: kube-dns
	I1217 00:42:42.853133  280822 system_pods.go:86] 8 kube-system pods found
	I1217 00:42:42.853164  280822 system_pods.go:89] "coredns-66bc5c9577-vtspd" [aedf434b-e03e-479c-a8f2-199e28231d61] Running
	I1217 00:42:42.853172  280822 system_pods.go:89] "etcd-embed-certs-153232" [68a7a631-c79e-48d1-bd8d-1aafc2b61fcc] Running
	I1217 00:42:42.853177  280822 system_pods.go:89] "kindnet-zffzt" [f06f5d73-eef9-4876-b0aa-862d58c18777] Running
	I1217 00:42:42.853183  280822 system_pods.go:89] "kube-apiserver-embed-certs-153232" [a0a484be-31c5-4471-b35c-7d059d9e1b00] Running
	I1217 00:42:42.853190  280822 system_pods.go:89] "kube-controller-manager-embed-certs-153232" [6fd01afb-bd8e-450b-9082-310ff94c5958] Running
	I1217 00:42:42.853195  280822 system_pods.go:89] "kube-proxy-82b8k" [68026912-6bcc-4aee-b806-51f967dc200f] Running
	I1217 00:42:42.853200  280822 system_pods.go:89] "kube-scheduler-embed-certs-153232" [af854f70-8bef-44c5-ad64-197a3282d5c3] Running
	I1217 00:42:42.853205  280822 system_pods.go:89] "storage-provisioner" [ad4a1982-2da6-490d-bcba-f04782d2d9b8] Running
	I1217 00:42:42.853214  280822 system_pods.go:126] duration metric: took 1.059606129s to wait for k8s-apps to be running ...
	I1217 00:42:42.853227  280822 system_svc.go:44] waiting for kubelet service to be running ....
	I1217 00:42:42.853279  280822 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 00:42:42.869286  280822 system_svc.go:56] duration metric: took 16.049777ms WaitForService to wait for kubelet
	I1217 00:42:42.869316  280822 kubeadm.go:587] duration metric: took 12.584493992s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 00:42:42.869340  280822 node_conditions.go:102] verifying NodePressure condition ...
	I1217 00:42:42.872567  280822 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1217 00:42:42.872595  280822 node_conditions.go:123] node cpu capacity is 8
	I1217 00:42:42.872609  280822 node_conditions.go:105] duration metric: took 3.264541ms to run NodePressure ...
	I1217 00:42:42.872621  280822 start.go:242] waiting for startup goroutines ...
	I1217 00:42:42.872628  280822 start.go:247] waiting for cluster config update ...
	I1217 00:42:42.872641  280822 start.go:256] writing updated cluster config ...
	I1217 00:42:42.872974  280822 ssh_runner.go:195] Run: rm -f paused
	I1217 00:42:42.877546  280822 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 00:42:42.881940  280822 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-vtspd" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:42:42.886950  280822 pod_ready.go:94] pod "coredns-66bc5c9577-vtspd" is "Ready"
	I1217 00:42:42.886970  280822 pod_ready.go:86] duration metric: took 4.999829ms for pod "coredns-66bc5c9577-vtspd" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:42:42.889527  280822 pod_ready.go:83] waiting for pod "etcd-embed-certs-153232" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:42:42.895847  280822 pod_ready.go:94] pod "etcd-embed-certs-153232" is "Ready"
	I1217 00:42:42.895869  280822 pod_ready.go:86] duration metric: took 6.325871ms for pod "etcd-embed-certs-153232" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:42:42.898281  280822 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-153232" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:42:42.902688  280822 pod_ready.go:94] pod "kube-apiserver-embed-certs-153232" is "Ready"
	I1217 00:42:42.902710  280822 pod_ready.go:86] duration metric: took 4.408331ms for pod "kube-apiserver-embed-certs-153232" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:42:42.905039  280822 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-153232" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:42:41.849058  284412 addons.go:530] duration metric: took 655.212128ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1217 00:42:42.080776  284412 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-414413" context rescaled to 1 replicas
	I1217 00:42:43.281597  280822 pod_ready.go:94] pod "kube-controller-manager-embed-certs-153232" is "Ready"
	I1217 00:42:43.281626  280822 pod_ready.go:86] duration metric: took 376.5674ms for pod "kube-controller-manager-embed-certs-153232" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:42:43.484610  280822 pod_ready.go:83] waiting for pod "kube-proxy-82b8k" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:42:43.960602  280822 pod_ready.go:94] pod "kube-proxy-82b8k" is "Ready"
	I1217 00:42:43.960650  280822 pod_ready.go:86] duration metric: took 476.012578ms for pod "kube-proxy-82b8k" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:42:44.099686  280822 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-153232" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:42:44.482807  280822 pod_ready.go:94] pod "kube-scheduler-embed-certs-153232" is "Ready"
	I1217 00:42:44.482862  280822 pod_ready.go:86] duration metric: took 383.141625ms for pod "kube-scheduler-embed-certs-153232" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:42:44.482879  280822 pod_ready.go:40] duration metric: took 1.605302389s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 00:42:44.546591  280822 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1217 00:42:44.548075  280822 out.go:179] * Done! kubectl is now configured to use "embed-certs-153232" cluster and "default" namespace by default
	I1217 00:42:39.757771  292081 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1217 00:42:39.758042  292081 start.go:159] libmachine.API.Create for "newest-cni-653717" (driver="docker")
	I1217 00:42:39.758083  292081 client.go:173] LocalClient.Create starting
	I1217 00:42:39.758162  292081 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem
	I1217 00:42:39.758203  292081 main.go:143] libmachine: Decoding PEM data...
	I1217 00:42:39.758225  292081 main.go:143] libmachine: Parsing certificate...
	I1217 00:42:39.758288  292081 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22168-12816/.minikube/certs/cert.pem
	I1217 00:42:39.758312  292081 main.go:143] libmachine: Decoding PEM data...
	I1217 00:42:39.758329  292081 main.go:143] libmachine: Parsing certificate...
	I1217 00:42:39.758773  292081 cli_runner.go:164] Run: docker network inspect newest-cni-653717 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1217 00:42:39.776750  292081 cli_runner.go:211] docker network inspect newest-cni-653717 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1217 00:42:39.776824  292081 network_create.go:284] running [docker network inspect newest-cni-653717] to gather additional debugging logs...
	I1217 00:42:39.776846  292081 cli_runner.go:164] Run: docker network inspect newest-cni-653717
	W1217 00:42:39.795539  292081 cli_runner.go:211] docker network inspect newest-cni-653717 returned with exit code 1
	I1217 00:42:39.795568  292081 network_create.go:287] error running [docker network inspect newest-cni-653717]: docker network inspect newest-cni-653717: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-653717 not found
	I1217 00:42:39.795583  292081 network_create.go:289] output of [docker network inspect newest-cni-653717]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-653717 not found
	
	** /stderr **
	I1217 00:42:39.795681  292081 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 00:42:39.813581  292081 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-6ffd1d738f01 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:6e:3d:52:75:47:82} reservation:<nil>}
	I1217 00:42:39.814315  292081 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-280edd437675 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:5e:ae:02:b5:f9:a6} reservation:<nil>}
	I1217 00:42:39.815124  292081 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-9f28d049043c IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:fa:3f:8e:e9:44:56} reservation:<nil>}
	I1217 00:42:39.815715  292081 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-a57026acfc12 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:aa:e6:32:39:49:3b} reservation:<nil>}
	I1217 00:42:39.816283  292081 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-a0b8f164bc66 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:ae:bf:0f:c2:a1:7a} reservation:<nil>}
	I1217 00:42:39.817094  292081 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ed8b70}
	I1217 00:42:39.817124  292081 network_create.go:124] attempt to create docker network newest-cni-653717 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1217 00:42:39.817179  292081 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-653717 newest-cni-653717
	I1217 00:42:39.867249  292081 network_create.go:108] docker network newest-cni-653717 192.168.94.0/24 created
	I1217 00:42:39.867283  292081 kic.go:121] calculated static IP "192.168.94.2" for the "newest-cni-653717" container
	I1217 00:42:39.867363  292081 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1217 00:42:39.884952  292081 cli_runner.go:164] Run: docker volume create newest-cni-653717 --label name.minikube.sigs.k8s.io=newest-cni-653717 --label created_by.minikube.sigs.k8s.io=true
	I1217 00:42:39.903653  292081 oci.go:103] Successfully created a docker volume newest-cni-653717
	I1217 00:42:39.903740  292081 cli_runner.go:164] Run: docker run --rm --name newest-cni-653717-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-653717 --entrypoint /usr/bin/test -v newest-cni-653717:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib
	I1217 00:42:40.332097  292081 oci.go:107] Successfully prepared a docker volume newest-cni-653717
	I1217 00:42:40.332180  292081 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1217 00:42:40.332197  292081 kic.go:194] Starting extracting preloaded images to volume ...
	I1217 00:42:40.332280  292081 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22168-12816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-653717:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir
	I1217 00:42:44.481201  292081 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22168-12816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-653717:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir: (4.148853331s)
	I1217 00:42:44.481236  292081 kic.go:203] duration metric: took 4.149035302s to extract preloaded images to volume ...
	W1217 00:42:44.481343  292081 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1217 00:42:44.481388  292081 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1217 00:42:44.481435  292081 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1217 00:42:42.731891  290128 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:42:42.731907  290128 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 00:42:42.731955  290128 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-864613
	I1217 00:42:42.763570  290128 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 00:42:42.763694  290128 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 00:42:42.763796  290128 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-864613
	I1217 00:42:42.767548  290128 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/no-preload-864613/id_rsa Username:docker}
	I1217 00:42:42.770701  290128 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/no-preload-864613/id_rsa Username:docker}
	I1217 00:42:42.798221  290128 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/no-preload-864613/id_rsa Username:docker}
	I1217 00:42:42.895839  290128 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 00:42:42.898131  290128 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:42:42.919421  290128 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1217 00:42:42.919446  290128 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1217 00:42:42.924414  290128 node_ready.go:35] waiting up to 6m0s for node "no-preload-864613" to be "Ready" ...
	I1217 00:42:42.925969  290128 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:42:42.957244  290128 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1217 00:42:42.957271  290128 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1217 00:42:42.992919  290128 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1217 00:42:42.992940  290128 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1217 00:42:43.014226  290128 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1217 00:42:43.014254  290128 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1217 00:42:43.030103  290128 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1217 00:42:43.030126  290128 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1217 00:42:43.045016  290128 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1217 00:42:43.045040  290128 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1217 00:42:43.058207  290128 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1217 00:42:43.058229  290128 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1217 00:42:43.073567  290128 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1217 00:42:43.073591  290128 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1217 00:42:43.089409  290128 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1217 00:42:43.089435  290128 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1217 00:42:43.104309  290128 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1217 00:42:43.888759  290128 node_ready.go:49] node "no-preload-864613" is "Ready"
	I1217 00:42:43.888791  290128 node_ready.go:38] duration metric: took 964.340322ms for node "no-preload-864613" to be "Ready" ...
	I1217 00:42:43.888806  290128 api_server.go:52] waiting for apiserver process to appear ...
	I1217 00:42:43.888858  290128 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:44.730253  290128 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.832086371s)
	I1217 00:42:44.730302  290128 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.804306557s)
	I1217 00:42:44.730394  290128 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.626038593s)
	I1217 00:42:44.730467  290128 api_server.go:72] duration metric: took 2.042440177s to wait for apiserver process to appear ...
	I1217 00:42:44.730494  290128 api_server.go:88] waiting for apiserver healthz status ...
	I1217 00:42:44.730534  290128 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1217 00:42:44.732808  290128 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-864613 addons enable metrics-server
	
	I1217 00:42:44.736310  290128 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1217 00:42:44.736333  290128 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1217 00:42:44.739032  290128 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1217 00:42:44.740068  290128 addons.go:530] duration metric: took 2.051763832s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1217 00:42:44.554887  292081 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-653717 --name newest-cni-653717 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-653717 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-653717 --network newest-cni-653717 --ip 192.168.94.2 --volume newest-cni-653717:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78
	I1217 00:42:44.859934  292081 cli_runner.go:164] Run: docker container inspect newest-cni-653717 --format={{.State.Running}}
	I1217 00:42:44.879772  292081 cli_runner.go:164] Run: docker container inspect newest-cni-653717 --format={{.State.Status}}
	I1217 00:42:44.902759  292081 cli_runner.go:164] Run: docker exec newest-cni-653717 stat /var/lib/dpkg/alternatives/iptables
	I1217 00:42:44.958456  292081 oci.go:144] the created container "newest-cni-653717" has a running status.
	I1217 00:42:44.958499  292081 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22168-12816/.minikube/machines/newest-cni-653717/id_rsa...
	I1217 00:42:45.146969  292081 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22168-12816/.minikube/machines/newest-cni-653717/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1217 00:42:45.178425  292081 cli_runner.go:164] Run: docker container inspect newest-cni-653717 --format={{.State.Status}}
	I1217 00:42:45.205673  292081 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1217 00:42:45.205749  292081 kic_runner.go:114] Args: [docker exec --privileged newest-cni-653717 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1217 00:42:45.272222  292081 cli_runner.go:164] Run: docker container inspect newest-cni-653717 --format={{.State.Status}}
	I1217 00:42:45.302920  292081 machine.go:94] provisionDockerMachine start ...
	I1217 00:42:45.303079  292081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-653717
	I1217 00:42:45.332494  292081 main.go:143] libmachine: Using SSH client type: native
	I1217 00:42:45.332879  292081 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1217 00:42:45.332905  292081 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 00:42:45.470045  292081 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-653717
	
	I1217 00:42:45.470072  292081 ubuntu.go:182] provisioning hostname "newest-cni-653717"
	I1217 00:42:45.470145  292081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-653717
	I1217 00:42:45.489669  292081 main.go:143] libmachine: Using SSH client type: native
	I1217 00:42:45.489903  292081 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1217 00:42:45.489921  292081 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-653717 && echo "newest-cni-653717" | sudo tee /etc/hostname
	I1217 00:42:45.644161  292081 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-653717
	
	I1217 00:42:45.644290  292081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-653717
	I1217 00:42:45.670660  292081 main.go:143] libmachine: Using SSH client type: native
	I1217 00:42:45.670959  292081 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1217 00:42:45.671001  292081 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-653717' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-653717/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-653717' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 00:42:45.810630  292081 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 00:42:45.810662  292081 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22168-12816/.minikube CaCertPath:/home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22168-12816/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22168-12816/.minikube}
	I1217 00:42:45.810686  292081 ubuntu.go:190] setting up certificates
	I1217 00:42:45.810696  292081 provision.go:84] configureAuth start
	I1217 00:42:45.810765  292081 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-653717
	I1217 00:42:45.829459  292081 provision.go:143] copyHostCerts
	I1217 00:42:45.829525  292081 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-12816/.minikube/ca.pem, removing ...
	I1217 00:42:45.829539  292081 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-12816/.minikube/ca.pem
	I1217 00:42:45.829631  292081 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22168-12816/.minikube/ca.pem (1078 bytes)
	I1217 00:42:45.829741  292081 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-12816/.minikube/cert.pem, removing ...
	I1217 00:42:45.829751  292081 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-12816/.minikube/cert.pem
	I1217 00:42:45.829780  292081 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22168-12816/.minikube/cert.pem (1123 bytes)
	I1217 00:42:45.829850  292081 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-12816/.minikube/key.pem, removing ...
	I1217 00:42:45.829858  292081 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-12816/.minikube/key.pem
	I1217 00:42:45.829882  292081 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22168-12816/.minikube/key.pem (1679 bytes)
	I1217 00:42:45.829934  292081 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22168-12816/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca-key.pem org=jenkins.newest-cni-653717 san=[127.0.0.1 192.168.94.2 localhost minikube newest-cni-653717]
	I1217 00:42:45.958055  292081 provision.go:177] copyRemoteCerts
	I1217 00:42:45.958127  292081 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 00:42:45.958174  292081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-653717
	I1217 00:42:45.984112  292081 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/newest-cni-653717/id_rsa Username:docker}
	I1217 00:42:46.086012  292081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1217 00:42:46.104624  292081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1217 00:42:46.121927  292081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1217 00:42:46.138837  292081 provision.go:87] duration metric: took 328.114013ms to configureAuth
	I1217 00:42:46.138862  292081 ubuntu.go:206] setting minikube options for container-runtime
	I1217 00:42:46.139071  292081 config.go:182] Loaded profile config "newest-cni-653717": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1217 00:42:46.139186  292081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-653717
	I1217 00:42:46.157087  292081 main.go:143] libmachine: Using SSH client type: native
	I1217 00:42:46.157347  292081 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1217 00:42:46.157376  292081 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 00:42:46.424454  292081 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 00:42:46.424492  292081 machine.go:97] duration metric: took 1.121525305s to provisionDockerMachine
	I1217 00:42:46.424503  292081 client.go:176] duration metric: took 6.666411162s to LocalClient.Create
	I1217 00:42:46.424518  292081 start.go:167] duration metric: took 6.666478769s to libmachine.API.Create "newest-cni-653717"
	I1217 00:42:46.424527  292081 start.go:293] postStartSetup for "newest-cni-653717" (driver="docker")
	I1217 00:42:46.424540  292081 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 00:42:46.424592  292081 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 00:42:46.424624  292081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-653717
	I1217 00:42:46.442796  292081 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/newest-cni-653717/id_rsa Username:docker}
	I1217 00:42:46.536618  292081 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 00:42:46.540051  292081 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 00:42:46.540072  292081 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 00:42:46.540082  292081 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-12816/.minikube/addons for local assets ...
	I1217 00:42:46.540139  292081 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-12816/.minikube/files for local assets ...
	I1217 00:42:46.540216  292081 filesync.go:149] local asset: /home/jenkins/minikube-integration/22168-12816/.minikube/files/etc/ssl/certs/163542.pem -> 163542.pem in /etc/ssl/certs
	I1217 00:42:46.540306  292081 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 00:42:46.547511  292081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/files/etc/ssl/certs/163542.pem --> /etc/ssl/certs/163542.pem (1708 bytes)
	I1217 00:42:46.567395  292081 start.go:296] duration metric: took 142.85649ms for postStartSetup
	I1217 00:42:46.567722  292081 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-653717
	I1217 00:42:46.586027  292081 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/newest-cni-653717/config.json ...
	I1217 00:42:46.586297  292081 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 00:42:46.586350  292081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-653717
	I1217 00:42:46.604529  292081 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/newest-cni-653717/id_rsa Username:docker}
	I1217 00:42:46.695141  292081 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 00:42:46.700409  292081 start.go:128] duration metric: took 6.945052111s to createHost
	I1217 00:42:46.700434  292081 start.go:83] releasing machines lock for "newest-cni-653717", held for 6.94528556s
	I1217 00:42:46.700506  292081 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-653717
	I1217 00:42:46.719971  292081 ssh_runner.go:195] Run: cat /version.json
	I1217 00:42:46.720049  292081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-653717
	I1217 00:42:46.720057  292081 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 00:42:46.720124  292081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-653717
	I1217 00:42:46.738390  292081 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/newest-cni-653717/id_rsa Username:docker}
	I1217 00:42:46.738747  292081 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/newest-cni-653717/id_rsa Username:docker}
	I1217 00:42:46.882381  292081 ssh_runner.go:195] Run: systemctl --version
	I1217 00:42:46.888882  292081 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 00:42:46.924064  292081 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 00:42:46.928655  292081 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 00:42:46.928703  292081 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 00:42:46.953084  292081 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1217 00:42:46.953107  292081 start.go:496] detecting cgroup driver to use...
	I1217 00:42:46.953139  292081 detect.go:190] detected "systemd" cgroup driver on host os
	I1217 00:42:46.953190  292081 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 00:42:46.969605  292081 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 00:42:46.981627  292081 docker.go:218] disabling cri-docker service (if available) ...
	I1217 00:42:46.981696  292081 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 00:42:46.997969  292081 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 00:42:47.015481  292081 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 00:42:47.102372  292081 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 00:42:47.195858  292081 docker.go:234] disabling docker service ...
	I1217 00:42:47.195927  292081 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 00:42:47.214755  292081 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 00:42:47.228327  292081 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 00:42:47.313282  292081 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 00:42:47.402263  292081 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
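
	The cri-docker and docker units above are torn down with ordinary systemctl calls; condensed into one sketch (same unit names as in the log), the sequence is:

    for unit in cri-docker.socket cri-docker.service docker.socket docker.service; do
      sudo systemctl stop -f "$unit" || true   # ignore units that are not installed
    done
    sudo systemctl disable cri-docker.socket docker.socket
    sudo systemctl mask cri-docker.service docker.service
    sudo systemctl is-active --quiet docker || echo "docker is inactive"

	Masking (rather than only disabling) keeps socket activation from bringing Docker back up behind CRI-O.
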
	I1217 00:42:47.415123  292081 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 00:42:47.429297  292081 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 00:42:47.429343  292081 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:42:47.439140  292081 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1217 00:42:47.439181  292081 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:42:47.447551  292081 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:42:47.456120  292081 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:42:47.464976  292081 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 00:42:47.472532  292081 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:42:47.480749  292081 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:42:47.494427  292081 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
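
	All of the CRI-O adjustments above are sed edits of a single drop-in plus a one-line crictl.yaml; the same edits, condensed (paths, pause image and cgroup driver taken from the log):

    #!/usr/bin/env bash
    set -euo pipefail
    CONF=/etc/crio/crio.conf.d/02-crio.conf

    # Point crictl at the CRI-O socket.
    printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml

    # Pause image and systemd cgroup driver.
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$CONF"
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' "$CONF"
    sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"

    # Let pods bind low ports without extra capabilities.
    sudo grep -q '^ *default_sysctls' "$CONF" || \
      sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' "$CONF"
    sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' "$CONF"

	CRI-O only picks these up on restart, which is the daemon-reload / restart crio pair a few lines below.
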
	I1217 00:42:47.502977  292081 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 00:42:47.510020  292081 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 00:42:47.517810  292081 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:42:47.603008  292081 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1217 00:42:47.760326  292081 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 00:42:47.760395  292081 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 00:42:47.764833  292081 start.go:564] Will wait 60s for crictl version
	I1217 00:42:47.764898  292081 ssh_runner.go:195] Run: which crictl
	I1217 00:42:47.768771  292081 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 00:42:47.794033  292081 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1217 00:42:47.794112  292081 ssh_runner.go:195] Run: crio --version
	I1217 00:42:47.825452  292081 ssh_runner.go:195] Run: crio --version
	I1217 00:42:47.857636  292081 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1217 00:42:47.858656  292081 cli_runner.go:164] Run: docker network inspect newest-cni-653717 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 00:42:47.876540  292081 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1217 00:42:47.880551  292081 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
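
	The /etc/hosts update above is a filter-and-rewrite rather than an append, so repeated starts do not stack duplicate entries; as a standalone sketch (gateway IP taken from the log):

    #!/usr/bin/env bash
    IP=192.168.94.1   # gateway of the cluster's docker network, per the grep above
    { grep -v $'\thost.minikube.internal$' /etc/hosts
      printf '%s\thost.minikube.internal\n' "$IP"; } > /tmp/hosts.$$
    sudo cp /tmp/hosts.$$ /etc/hosts
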
	I1217 00:42:47.891708  292081 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	W1217 00:42:43.579985  284412 node_ready.go:57] node "default-k8s-diff-port-414413" has "Ready":"False" status (will retry)
	W1217 00:42:45.580315  284412 node_ready.go:57] node "default-k8s-diff-port-414413" has "Ready":"False" status (will retry)
	W1217 00:42:47.580369  284412 node_ready.go:57] node "default-k8s-diff-port-414413" has "Ready":"False" status (will retry)
	I1217 00:42:47.892665  292081 kubeadm.go:884] updating cluster {Name:newest-cni-653717 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-653717 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizatio
ns:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 00:42:47.892819  292081 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1217 00:42:47.892873  292081 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 00:42:47.922682  292081 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 00:42:47.922702  292081 crio.go:433] Images already preloaded, skipping extraction
	I1217 00:42:47.922742  292081 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 00:42:47.948548  292081 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 00:42:47.948566  292081 cache_images.go:86] Images are preloaded, skipping loading
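
	The preload check is a single crictl call against the image store; by hand:

    sudo crictl images                  # human-readable list of the preloaded images
    sudo crictl images --output json    # the machine-readable form parsed in the log above
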
	I1217 00:42:47.948572  292081 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.35.0-beta.0 crio true true} ...
	I1217 00:42:47.948644  292081 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-653717 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-653717 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
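
	That ExecStart line lands in a systemd drop-in (written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below); to inspect and apply it on the node, roughly:

    sudo systemctl cat kubelet                       # base unit plus the 10-kubeadm.conf drop-in
    sudo systemctl daemon-reload
    sudo systemctl restart kubelet
    sudo systemctl status kubelet --no-pager -n 20   # last 20 journal lines as a quick sanity check
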
	I1217 00:42:47.948706  292081 ssh_runner.go:195] Run: crio config
	I1217 00:42:47.998076  292081 cni.go:84] Creating CNI manager for ""
	I1217 00:42:47.998103  292081 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 00:42:47.998123  292081 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1217 00:42:47.998153  292081 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-653717 NodeName:newest-cni-653717 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 00:42:47.998316  292081 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-653717"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
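
	The three documents above are written to /var/tmp/minikube/kubeadm.yaml.new and fed to kubeadm init further down; as a hypothetical sanity check before init (recent kubeadm releases ship a 'config validate' subcommand, so this should also work with the staged v1.35.0-beta.0 binary):

    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new
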
	
	I1217 00:42:47.998384  292081 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1217 00:42:48.008451  292081 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 00:42:48.008505  292081 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 00:42:48.018068  292081 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1217 00:42:48.032092  292081 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1217 00:42:48.046963  292081 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2218 bytes)
	I1217 00:42:48.058965  292081 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1217 00:42:48.062632  292081 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 00:42:48.072208  292081 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:42:48.155827  292081 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 00:42:48.181149  292081 certs.go:69] Setting up /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/newest-cni-653717 for IP: 192.168.94.2
	I1217 00:42:48.181168  292081 certs.go:195] generating shared ca certs ...
	I1217 00:42:48.181185  292081 certs.go:227] acquiring lock for ca certs: {Name:mk3fafd0dd66863a6056cb02497503a5e6afecf4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:42:48.181315  292081 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22168-12816/.minikube/ca.key
	I1217 00:42:48.181355  292081 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22168-12816/.minikube/proxy-client-ca.key
	I1217 00:42:48.181365  292081 certs.go:257] generating profile certs ...
	I1217 00:42:48.181431  292081 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/newest-cni-653717/client.key
	I1217 00:42:48.181455  292081 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/newest-cni-653717/client.crt with IP's: []
	I1217 00:42:48.204435  292081 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/newest-cni-653717/client.crt ...
	I1217 00:42:48.204457  292081 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/newest-cni-653717/client.crt: {Name:mk706a547645679cf593c6b6b64a5b13d6509c3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:42:48.204624  292081 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/newest-cni-653717/client.key ...
	I1217 00:42:48.204643  292081 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/newest-cni-653717/client.key: {Name:mk2afcb3a7b31c81f1f103ac537112f286b679a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:42:48.204746  292081 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/newest-cni-653717/apiserver.key.17c07d81
	I1217 00:42:48.204762  292081 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/newest-cni-653717/apiserver.crt.17c07d81 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1217 00:42:48.250524  292081 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/newest-cni-653717/apiserver.crt.17c07d81 ...
	I1217 00:42:48.250546  292081 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/newest-cni-653717/apiserver.crt.17c07d81: {Name:mk9b44a0d7e2e4ebfad604c15171baaa270cfc11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:42:48.250684  292081 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/newest-cni-653717/apiserver.key.17c07d81 ...
	I1217 00:42:48.250696  292081 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/newest-cni-653717/apiserver.key.17c07d81: {Name:mk49169f7d724cca6994caea611fcf0ceba24cbf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:42:48.250766  292081 certs.go:382] copying /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/newest-cni-653717/apiserver.crt.17c07d81 -> /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/newest-cni-653717/apiserver.crt
	I1217 00:42:48.250832  292081 certs.go:386] copying /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/newest-cni-653717/apiserver.key.17c07d81 -> /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/newest-cni-653717/apiserver.key
	I1217 00:42:48.250890  292081 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/newest-cni-653717/proxy-client.key
	I1217 00:42:48.250905  292081 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/newest-cni-653717/proxy-client.crt with IP's: []
	I1217 00:42:48.311073  292081 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/newest-cni-653717/proxy-client.crt ...
	I1217 00:42:48.311096  292081 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/newest-cni-653717/proxy-client.crt: {Name:mk191e919f78ff769818c78eee7f416c2b6c7966 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:42:48.311228  292081 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/newest-cni-653717/proxy-client.key ...
	I1217 00:42:48.311240  292081 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/newest-cni-653717/proxy-client.key: {Name:mk28eaa7bd38fac072b93b2b9e0af2cc79a6b0d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:42:48.311403  292081 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/16354.pem (1338 bytes)
	W1217 00:42:48.311447  292081 certs.go:480] ignoring /home/jenkins/minikube-integration/22168-12816/.minikube/certs/16354_empty.pem, impossibly tiny 0 bytes
	I1217 00:42:48.311462  292081 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca-key.pem (1679 bytes)
	I1217 00:42:48.311499  292081 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem (1078 bytes)
	I1217 00:42:48.311527  292081 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/cert.pem (1123 bytes)
	I1217 00:42:48.311550  292081 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/key.pem (1679 bytes)
	I1217 00:42:48.311593  292081 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/files/etc/ssl/certs/163542.pem (1708 bytes)
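
	minikube mints these certificates with its own Go code; purely for reference, an equivalent CA plus apiserver certificate with the same IP SANs could be produced by hand with openssl (a sketch, not what minikube actually runs):

    #!/usr/bin/env bash
    set -euo pipefail
    # Stand-in CA (the role minikubeCA plays above).
    openssl req -x509 -newkey rsa:2048 -nodes -days 3650 \
      -subj "/CN=minikubeCA" -keyout ca.key -out ca.crt
    # Key and CSR for the apiserver certificate.
    openssl req -newkey rsa:2048 -nodes -subj "/CN=minikube" \
      -keyout apiserver.key -out apiserver.csr
    # Sign with the same IP SANs seen in the log: service VIP, loopback, and the node IP.
    openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 365 \
      -extfile <(printf 'subjectAltName=IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1,IP:192.168.94.2') \
      -out apiserver.crt
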
	I1217 00:42:48.312140  292081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 00:42:48.330195  292081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1217 00:42:48.346815  292081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 00:42:48.363297  292081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1671 bytes)
	I1217 00:42:48.380352  292081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/newest-cni-653717/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1217 00:42:48.396979  292081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/newest-cni-653717/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1217 00:42:48.413283  292081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/newest-cni-653717/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 00:42:48.429487  292081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/newest-cni-653717/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1217 00:42:48.446870  292081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/files/etc/ssl/certs/163542.pem --> /usr/share/ca-certificates/163542.pem (1708 bytes)
	I1217 00:42:48.465676  292081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 00:42:48.482658  292081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/certs/16354.pem --> /usr/share/ca-certificates/16354.pem (1338 bytes)
	I1217 00:42:48.499956  292081 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 00:42:48.511878  292081 ssh_runner.go:195] Run: openssl version
	I1217 00:42:48.517834  292081 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/163542.pem
	I1217 00:42:48.524686  292081 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/163542.pem /etc/ssl/certs/163542.pem
	I1217 00:42:48.531652  292081 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/163542.pem
	I1217 00:42:48.535189  292081 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 00:13 /usr/share/ca-certificates/163542.pem
	I1217 00:42:48.535244  292081 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/163542.pem
	I1217 00:42:48.572541  292081 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 00:42:48.580505  292081 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/163542.pem /etc/ssl/certs/3ec20f2e.0
	I1217 00:42:48.587403  292081 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:42:48.594664  292081 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 00:42:48.602161  292081 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:42:48.605749  292081 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 00:05 /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:42:48.605792  292081 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:42:48.647175  292081 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 00:42:48.654933  292081 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1217 00:42:48.662929  292081 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/16354.pem
	I1217 00:42:48.670361  292081 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/16354.pem /etc/ssl/certs/16354.pem
	I1217 00:42:48.677425  292081 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16354.pem
	I1217 00:42:48.680914  292081 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 00:13 /usr/share/ca-certificates/16354.pem
	I1217 00:42:48.680965  292081 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16354.pem
	I1217 00:42:48.717977  292081 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 00:42:48.725584  292081 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/16354.pem /etc/ssl/certs/51391683.0
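
	The 3ec20f2e.0, b5213941.0 and 51391683.0 links are OpenSSL subject-hash names, which is what the x509 -hash calls above compute; the generic idiom is:

    CERT=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")   # b5213941 for minikubeCA, per the log
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"

	OpenSSL resolves trust by looking up <subject-hash>.0 in /etc/ssl/certs, so the symlink is all that is needed for the CA to be trusted on the node.
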
	I1217 00:42:48.733342  292081 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 00:42:48.737247  292081 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1217 00:42:48.737296  292081 kubeadm.go:401] StartCluster: {Name:newest-cni-653717 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-653717 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:
false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:42:48.737379  292081 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 00:42:48.737429  292081 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 00:42:48.766866  292081 cri.go:89] found id: ""
	I1217 00:42:48.766920  292081 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 00:42:48.775388  292081 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 00:42:48.784570  292081 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 00:42:48.784637  292081 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 00:42:48.794346  292081 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 00:42:48.794366  292081 kubeadm.go:158] found existing configuration files:
	
	I1217 00:42:48.794414  292081 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1217 00:42:48.804623  292081 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 00:42:48.804684  292081 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 00:42:48.814188  292081 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1217 00:42:48.824205  292081 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 00:42:48.824260  292081 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 00:42:48.833632  292081 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1217 00:42:48.843633  292081 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 00:42:48.843687  292081 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 00:42:48.852733  292081 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1217 00:42:48.863156  292081 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 00:42:48.863217  292081 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 00:42:48.871629  292081 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 00:42:48.918628  292081 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1217 00:42:48.918706  292081 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 00:42:49.012576  292081 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 00:42:49.012694  292081 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1217 00:42:49.012779  292081 kubeadm.go:319] OS: Linux
	I1217 00:42:49.012850  292081 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 00:42:49.012934  292081 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 00:42:49.012981  292081 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 00:42:49.013068  292081 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 00:42:49.013147  292081 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 00:42:49.013231  292081 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 00:42:49.013306  292081 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 00:42:49.013350  292081 kubeadm.go:319] CGROUPS_IO: enabled
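
	The CGROUPS_* lines come from kubeadm's system verification; the same information can be read straight from the node, assuming a cgroup-v2 host (a v1 host exposes one directory per controller instead):

    cat /sys/fs/cgroup/cgroup.controllers   # cgroup v2: controllers available to the root cgroup
    ls /sys/fs/cgroup                       # cgroup v1 fallback: one mount per controller
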
	I1217 00:42:49.079170  292081 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 00:42:49.079317  292081 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 00:42:49.079463  292081 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 00:42:49.089927  292081 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 00:42:49.092958  292081 out.go:252]   - Generating certificates and keys ...
	I1217 00:42:49.093071  292081 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 00:42:49.093173  292081 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 00:42:49.222054  292081 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1217 00:42:49.258747  292081 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1217 00:42:49.400834  292081 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1217 00:42:49.535425  292081 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1217 00:42:45.231179  290128 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1217 00:42:45.239052  290128 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1217 00:42:45.240536  290128 api_server.go:141] control plane version: v1.35.0-beta.0
	I1217 00:42:45.240626  290128 api_server.go:131] duration metric: took 510.122414ms to wait for apiserver health ...
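
	The healthz probe is an ordinary HTTPS GET against the apiserver; done by hand (with -k for a quick unauthenticated check, or via kubectl with credentials, assuming the context name matches the profile):

    curl -sk https://192.168.103.2:8443/healthz ; echo
    kubectl --context no-preload-864613 get --raw /healthz
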
	I1217 00:42:45.240667  290128 system_pods.go:43] waiting for kube-system pods to appear ...
	I1217 00:42:45.245445  290128 system_pods.go:59] 8 kube-system pods found
	I1217 00:42:45.245514  290128 system_pods.go:61] "coredns-7d764666f9-6ql6r" [7fe29911-eb02-4cea-b42b-254fe65a4e65] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 00:42:45.245533  290128 system_pods.go:61] "etcd-no-preload-864613" [2cd02c45-52c1-43f0-8160-939b70247653] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1217 00:42:45.245594  290128 system_pods.go:61] "kindnet-bpf4x" [0b42df61-fef2-41ff-83f3-0abede84a5fb] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1217 00:42:45.245612  290128 system_pods.go:61] "kube-apiserver-no-preload-864613" [039d37cf-0e0f-45fa-9d35-a0a4deb68c2f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1217 00:42:45.245619  290128 system_pods.go:61] "kube-controller-manager-no-preload-864613" [bb99a38a-1b12-43f0-b562-96bca9e3f8fa] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1217 00:42:45.245625  290128 system_pods.go:61] "kube-proxy-2kddk" [7153c193-9583-4abd-a828-ec1dc91151e2] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1217 00:42:45.245630  290128 system_pods.go:61] "kube-scheduler-no-preload-864613" [10f61f47-8e53-41ce-b820-7e662dd29fcb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1217 00:42:45.245675  290128 system_pods.go:61] "storage-provisioner" [bf26b73d-473d-43a0-bf42-4d69abdd9e31] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 00:42:45.245778  290128 system_pods.go:74] duration metric: took 5.02268ms to wait for pod list to return data ...
	I1217 00:42:45.245808  290128 default_sa.go:34] waiting for default service account to be created ...
	I1217 00:42:45.249871  290128 default_sa.go:45] found service account: "default"
	I1217 00:42:45.249896  290128 default_sa.go:55] duration metric: took 4.070194ms for default service account to be created ...
	I1217 00:42:45.249909  290128 system_pods.go:116] waiting for k8s-apps to be running ...
	I1217 00:42:45.254534  290128 system_pods.go:86] 8 kube-system pods found
	I1217 00:42:45.254572  290128 system_pods.go:89] "coredns-7d764666f9-6ql6r" [7fe29911-eb02-4cea-b42b-254fe65a4e65] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 00:42:45.254585  290128 system_pods.go:89] "etcd-no-preload-864613" [2cd02c45-52c1-43f0-8160-939b70247653] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1217 00:42:45.254594  290128 system_pods.go:89] "kindnet-bpf4x" [0b42df61-fef2-41ff-83f3-0abede84a5fb] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1217 00:42:45.254603  290128 system_pods.go:89] "kube-apiserver-no-preload-864613" [039d37cf-0e0f-45fa-9d35-a0a4deb68c2f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1217 00:42:45.254612  290128 system_pods.go:89] "kube-controller-manager-no-preload-864613" [bb99a38a-1b12-43f0-b562-96bca9e3f8fa] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1217 00:42:45.254620  290128 system_pods.go:89] "kube-proxy-2kddk" [7153c193-9583-4abd-a828-ec1dc91151e2] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1217 00:42:45.254634  290128 system_pods.go:89] "kube-scheduler-no-preload-864613" [10f61f47-8e53-41ce-b820-7e662dd29fcb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1217 00:42:45.254647  290128 system_pods.go:89] "storage-provisioner" [bf26b73d-473d-43a0-bf42-4d69abdd9e31] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 00:42:45.254656  290128 system_pods.go:126] duration metric: took 4.73972ms to wait for k8s-apps to be running ...
	I1217 00:42:45.254666  290128 system_svc.go:44] waiting for kubelet service to be running ....
	I1217 00:42:45.254716  290128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 00:42:45.275259  290128 system_svc.go:56] duration metric: took 20.587102ms WaitForService to wait for kubelet
	I1217 00:42:45.275297  290128 kubeadm.go:587] duration metric: took 2.587270544s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 00:42:45.275318  290128 node_conditions.go:102] verifying NodePressure condition ...
	I1217 00:42:45.285140  290128 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1217 00:42:45.285176  290128 node_conditions.go:123] node cpu capacity is 8
	I1217 00:42:45.285194  290128 node_conditions.go:105] duration metric: took 9.870357ms to run NodePressure ...
	I1217 00:42:45.285208  290128 start.go:242] waiting for startup goroutines ...
	I1217 00:42:45.285219  290128 start.go:247] waiting for cluster config update ...
	I1217 00:42:45.285233  290128 start.go:256] writing updated cluster config ...
	I1217 00:42:45.285542  290128 ssh_runner.go:195] Run: rm -f paused
	I1217 00:42:45.292170  290128 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 00:42:45.297160  290128 pod_ready.go:83] waiting for pod "coredns-7d764666f9-6ql6r" in "kube-system" namespace to be "Ready" or be gone ...
	W1217 00:42:47.302980  290128 pod_ready.go:104] pod "coredns-7d764666f9-6ql6r" is not "Ready", error: <nil>
	W1217 00:42:49.303419  290128 pod_ready.go:104] pod "coredns-7d764666f9-6ql6r" is not "Ready", error: <nil>
	I1217 00:42:49.875900  292081 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1217 00:42:49.876099  292081 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-653717] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1217 00:42:49.960694  292081 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1217 00:42:49.960901  292081 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-653717] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1217 00:42:49.986333  292081 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1217 00:42:50.038475  292081 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1217 00:42:50.210231  292081 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1217 00:42:50.210371  292081 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 00:42:50.371871  292081 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 00:42:50.467844  292081 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 00:42:50.524877  292081 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 00:42:50.559110  292081 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 00:42:50.627240  292081 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 00:42:50.627953  292081 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 00:42:50.635874  292081 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1217 00:42:50.080796  284412 node_ready.go:57] node "default-k8s-diff-port-414413" has "Ready":"False" status (will retry)
	W1217 00:42:52.081883  284412 node_ready.go:57] node "default-k8s-diff-port-414413" has "Ready":"False" status (will retry)
	I1217 00:42:52.580309  284412 node_ready.go:49] node "default-k8s-diff-port-414413" is "Ready"
	I1217 00:42:52.580348  284412 node_ready.go:38] duration metric: took 11.003228991s for node "default-k8s-diff-port-414413" to be "Ready" ...
	I1217 00:42:52.580365  284412 api_server.go:52] waiting for apiserver process to appear ...
	I1217 00:42:52.580426  284412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:52.601898  284412 api_server.go:72] duration metric: took 11.408207765s to wait for apiserver process to appear ...
	I1217 00:42:52.601924  284412 api_server.go:88] waiting for apiserver healthz status ...
	I1217 00:42:52.601942  284412 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1217 00:42:52.614364  284412 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
	I1217 00:42:52.619775  284412 api_server.go:141] control plane version: v1.34.2
	I1217 00:42:52.619802  284412 api_server.go:131] duration metric: took 17.87077ms to wait for apiserver health ...
	I1217 00:42:52.619820  284412 system_pods.go:43] waiting for kube-system pods to appear ...
	I1217 00:42:52.628928  284412 system_pods.go:59] 8 kube-system pods found
	I1217 00:42:52.629032  284412 system_pods.go:61] "coredns-66bc5c9577-v76f4" [1370bcd6-f828-4ed0-af58-d2d87c7044bd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 00:42:52.629078  284412 system_pods.go:61] "etcd-default-k8s-diff-port-414413" [286460a9-8a6c-4939-a2a0-0d5b31620d9a] Running
	I1217 00:42:52.629104  284412 system_pods.go:61] "kindnet-hxhbf" [a4c2ed1b-ad48-484e-b779-4b93f3a72d0b] Running
	I1217 00:42:52.629119  284412 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-414413" [aa792fc5-63c2-4287-802e-c99c70a9ab2d] Running
	I1217 00:42:52.629134  284412 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-414413" [e9a02305-5b73-4867-8605-48c8202cf5dd] Running
	I1217 00:42:52.629148  284412 system_pods.go:61] "kube-proxy-prlkw" [9a4571d0-7682-4838-aeb3-ccb4480157b8] Running
	I1217 00:42:52.629162  284412 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-414413" [a71da427-5b35-43f4-827b-62a96fdfda42] Running
	I1217 00:42:52.629194  284412 system_pods.go:61] "storage-provisioner" [0405b749-23a9-4449-90ac-59daf539647b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 00:42:52.629205  284412 system_pods.go:74] duration metric: took 9.377067ms to wait for pod list to return data ...
	I1217 00:42:52.629215  284412 default_sa.go:34] waiting for default service account to be created ...
	I1217 00:42:52.632715  284412 default_sa.go:45] found service account: "default"
	I1217 00:42:52.632773  284412 default_sa.go:55] duration metric: took 3.551355ms for default service account to be created ...
	I1217 00:42:52.632809  284412 system_pods.go:116] waiting for k8s-apps to be running ...
	I1217 00:42:52.638857  284412 system_pods.go:86] 8 kube-system pods found
	I1217 00:42:52.638886  284412 system_pods.go:89] "coredns-66bc5c9577-v76f4" [1370bcd6-f828-4ed0-af58-d2d87c7044bd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 00:42:52.638894  284412 system_pods.go:89] "etcd-default-k8s-diff-port-414413" [286460a9-8a6c-4939-a2a0-0d5b31620d9a] Running
	I1217 00:42:52.638902  284412 system_pods.go:89] "kindnet-hxhbf" [a4c2ed1b-ad48-484e-b779-4b93f3a72d0b] Running
	I1217 00:42:52.638908  284412 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-414413" [aa792fc5-63c2-4287-802e-c99c70a9ab2d] Running
	I1217 00:42:52.638914  284412 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-414413" [e9a02305-5b73-4867-8605-48c8202cf5dd] Running
	I1217 00:42:52.638919  284412 system_pods.go:89] "kube-proxy-prlkw" [9a4571d0-7682-4838-aeb3-ccb4480157b8] Running
	I1217 00:42:52.638924  284412 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-414413" [a71da427-5b35-43f4-827b-62a96fdfda42] Running
	I1217 00:42:52.638931  284412 system_pods.go:89] "storage-provisioner" [0405b749-23a9-4449-90ac-59daf539647b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 00:42:52.638959  284412 retry.go:31] will retry after 286.683057ms: missing components: kube-dns
	I1217 00:42:52.932155  284412 system_pods.go:86] 8 kube-system pods found
	I1217 00:42:52.932199  284412 system_pods.go:89] "coredns-66bc5c9577-v76f4" [1370bcd6-f828-4ed0-af58-d2d87c7044bd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 00:42:52.932207  284412 system_pods.go:89] "etcd-default-k8s-diff-port-414413" [286460a9-8a6c-4939-a2a0-0d5b31620d9a] Running
	I1217 00:42:52.932215  284412 system_pods.go:89] "kindnet-hxhbf" [a4c2ed1b-ad48-484e-b779-4b93f3a72d0b] Running
	I1217 00:42:52.932220  284412 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-414413" [aa792fc5-63c2-4287-802e-c99c70a9ab2d] Running
	I1217 00:42:52.932233  284412 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-414413" [e9a02305-5b73-4867-8605-48c8202cf5dd] Running
	I1217 00:42:52.932238  284412 system_pods.go:89] "kube-proxy-prlkw" [9a4571d0-7682-4838-aeb3-ccb4480157b8] Running
	I1217 00:42:52.932244  284412 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-414413" [a71da427-5b35-43f4-827b-62a96fdfda42] Running
	I1217 00:42:52.932250  284412 system_pods.go:89] "storage-provisioner" [0405b749-23a9-4449-90ac-59daf539647b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 00:42:52.932266  284412 retry.go:31] will retry after 256.870822ms: missing components: kube-dns
	I1217 00:42:53.193952  284412 system_pods.go:86] 8 kube-system pods found
	I1217 00:42:53.194020  284412 system_pods.go:89] "coredns-66bc5c9577-v76f4" [1370bcd6-f828-4ed0-af58-d2d87c7044bd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 00:42:53.194031  284412 system_pods.go:89] "etcd-default-k8s-diff-port-414413" [286460a9-8a6c-4939-a2a0-0d5b31620d9a] Running
	I1217 00:42:53.194039  284412 system_pods.go:89] "kindnet-hxhbf" [a4c2ed1b-ad48-484e-b779-4b93f3a72d0b] Running
	I1217 00:42:53.194046  284412 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-414413" [aa792fc5-63c2-4287-802e-c99c70a9ab2d] Running
	I1217 00:42:53.194052  284412 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-414413" [e9a02305-5b73-4867-8605-48c8202cf5dd] Running
	I1217 00:42:53.194061  284412 system_pods.go:89] "kube-proxy-prlkw" [9a4571d0-7682-4838-aeb3-ccb4480157b8] Running
	I1217 00:42:53.194066  284412 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-414413" [a71da427-5b35-43f4-827b-62a96fdfda42] Running
	I1217 00:42:53.194071  284412 system_pods.go:89] "storage-provisioner" [0405b749-23a9-4449-90ac-59daf539647b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 00:42:53.194090  284412 retry.go:31] will retry after 397.719289ms: missing components: kube-dns
	I1217 00:42:50.637611  292081 out.go:252]   - Booting up control plane ...
	I1217 00:42:50.637750  292081 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 00:42:50.637839  292081 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 00:42:50.638890  292081 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 00:42:50.658581  292081 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 00:42:50.658710  292081 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 00:42:50.668112  292081 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 00:42:50.668403  292081 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 00:42:50.668563  292081 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 00:42:50.812447  292081 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 00:42:50.812638  292081 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 00:42:51.313921  292081 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.552791ms
	I1217 00:42:51.316755  292081 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1217 00:42:51.316901  292081 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I1217 00:42:51.317065  292081 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1217 00:42:51.317186  292081 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1217 00:42:52.825289  292081 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.508380882s
	I1217 00:42:54.252053  292081 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.934578503s
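
	These control-plane-check endpoints are local HTTPS listeners on the node and can be probed directly (-k, since controller-manager and scheduler serve self-signed certificates by default):

    curl -sk https://127.0.0.1:10257/healthz ; echo   # kube-controller-manager
    curl -sk https://127.0.0.1:10259/livez   ; echo   # kube-scheduler
    curl -sk https://192.168.94.2:8443/livez ; echo   # kube-apiserver
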
	W1217 00:42:51.808336  290128 pod_ready.go:104] pod "coredns-7d764666f9-6ql6r" is not "Ready", error: <nil>
	W1217 00:42:54.305562  290128 pod_ready.go:104] pod "coredns-7d764666f9-6ql6r" is not "Ready", error: <nil>
	I1217 00:42:53.597204  284412 system_pods.go:86] 8 kube-system pods found
	I1217 00:42:53.597239  284412 system_pods.go:89] "coredns-66bc5c9577-v76f4" [1370bcd6-f828-4ed0-af58-d2d87c7044bd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 00:42:53.597249  284412 system_pods.go:89] "etcd-default-k8s-diff-port-414413" [286460a9-8a6c-4939-a2a0-0d5b31620d9a] Running
	I1217 00:42:53.597258  284412 system_pods.go:89] "kindnet-hxhbf" [a4c2ed1b-ad48-484e-b779-4b93f3a72d0b] Running
	I1217 00:42:53.597269  284412 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-414413" [aa792fc5-63c2-4287-802e-c99c70a9ab2d] Running
	I1217 00:42:53.597275  284412 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-414413" [e9a02305-5b73-4867-8605-48c8202cf5dd] Running
	I1217 00:42:53.597287  284412 system_pods.go:89] "kube-proxy-prlkw" [9a4571d0-7682-4838-aeb3-ccb4480157b8] Running
	I1217 00:42:53.597293  284412 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-414413" [a71da427-5b35-43f4-827b-62a96fdfda42] Running
	I1217 00:42:53.597299  284412 system_pods.go:89] "storage-provisioner" [0405b749-23a9-4449-90ac-59daf539647b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 00:42:53.597317  284412 retry.go:31] will retry after 468.383665ms: missing components: kube-dns
	I1217 00:42:54.073424  284412 system_pods.go:86] 8 kube-system pods found
	I1217 00:42:54.073462  284412 system_pods.go:89] "coredns-66bc5c9577-v76f4" [1370bcd6-f828-4ed0-af58-d2d87c7044bd] Running
	I1217 00:42:54.073470  284412 system_pods.go:89] "etcd-default-k8s-diff-port-414413" [286460a9-8a6c-4939-a2a0-0d5b31620d9a] Running
	I1217 00:42:54.073477  284412 system_pods.go:89] "kindnet-hxhbf" [a4c2ed1b-ad48-484e-b779-4b93f3a72d0b] Running
	I1217 00:42:54.073482  284412 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-414413" [aa792fc5-63c2-4287-802e-c99c70a9ab2d] Running
	I1217 00:42:54.073490  284412 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-414413" [e9a02305-5b73-4867-8605-48c8202cf5dd] Running
	I1217 00:42:54.073499  284412 system_pods.go:89] "kube-proxy-prlkw" [9a4571d0-7682-4838-aeb3-ccb4480157b8] Running
	I1217 00:42:54.073505  284412 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-414413" [a71da427-5b35-43f4-827b-62a96fdfda42] Running
	I1217 00:42:54.073517  284412 system_pods.go:89] "storage-provisioner" [0405b749-23a9-4449-90ac-59daf539647b] Running
	I1217 00:42:54.073526  284412 system_pods.go:126] duration metric: took 1.440699726s to wait for k8s-apps to be running ...
	I1217 00:42:54.073542  284412 system_svc.go:44] waiting for kubelet service to be running ....
	I1217 00:42:54.073591  284412 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 00:42:54.097035  284412 system_svc.go:56] duration metric: took 23.442729ms WaitForService to wait for kubelet
	I1217 00:42:54.097064  284412 kubeadm.go:587] duration metric: took 12.903378654s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 00:42:54.097095  284412 node_conditions.go:102] verifying NodePressure condition ...
	I1217 00:42:54.102295  284412 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1217 00:42:54.102323  284412 node_conditions.go:123] node cpu capacity is 8
	I1217 00:42:54.102349  284412 node_conditions.go:105] duration metric: took 5.238889ms to run NodePressure ...
	I1217 00:42:54.102363  284412 start.go:242] waiting for startup goroutines ...
	I1217 00:42:54.102374  284412 start.go:247] waiting for cluster config update ...
	I1217 00:42:54.102387  284412 start.go:256] writing updated cluster config ...
	I1217 00:42:54.107309  284412 ssh_runner.go:195] Run: rm -f paused
	I1217 00:42:54.115453  284412 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 00:42:54.122663  284412 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-v76f4" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:42:54.128501  284412 pod_ready.go:94] pod "coredns-66bc5c9577-v76f4" is "Ready"
	I1217 00:42:54.128579  284412 pod_ready.go:86] duration metric: took 5.828834ms for pod "coredns-66bc5c9577-v76f4" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:42:54.132687  284412 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-414413" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:42:54.139385  284412 pod_ready.go:94] pod "etcd-default-k8s-diff-port-414413" is "Ready"
	I1217 00:42:54.139408  284412 pod_ready.go:86] duration metric: took 6.667735ms for pod "etcd-default-k8s-diff-port-414413" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:42:54.141609  284412 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-414413" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:42:54.145840  284412 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-414413" is "Ready"
	I1217 00:42:54.145865  284412 pod_ready.go:86] duration metric: took 4.236076ms for pod "kube-apiserver-default-k8s-diff-port-414413" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:42:54.149098  284412 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-414413" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:42:54.521142  284412 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-414413" is "Ready"
	I1217 00:42:54.521239  284412 pod_ready.go:86] duration metric: took 372.112472ms for pod "kube-controller-manager-default-k8s-diff-port-414413" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:42:54.722142  284412 pod_ready.go:83] waiting for pod "kube-proxy-prlkw" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:42:55.121661  284412 pod_ready.go:94] pod "kube-proxy-prlkw" is "Ready"
	I1217 00:42:55.121687  284412 pod_ready.go:86] duration metric: took 399.517411ms for pod "kube-proxy-prlkw" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:42:55.321591  284412 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-414413" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:42:55.720954  284412 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-414413" is "Ready"
	I1217 00:42:55.720984  284412 pod_ready.go:86] duration metric: took 399.365325ms for pod "kube-scheduler-default-k8s-diff-port-414413" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:42:55.721018  284412 pod_ready.go:40] duration metric: took 1.605397099s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 00:42:55.763713  284412 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1217 00:42:55.765254  284412 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-414413" cluster and "default" namespace by default
	I1217 00:42:55.818485  292081 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.501663396s
	I1217 00:42:55.839463  292081 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1217 00:42:55.850270  292081 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1217 00:42:55.859337  292081 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1217 00:42:55.859656  292081 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-653717 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1217 00:42:55.869331  292081 kubeadm.go:319] [bootstrap-token] Using token: xq2phg.ktr5edtc91gmhzse
	I1217 00:42:55.870386  292081 out.go:252]   - Configuring RBAC rules ...
	I1217 00:42:55.870484  292081 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1217 00:42:55.875045  292081 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1217 00:42:55.880634  292081 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1217 00:42:55.883504  292081 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1217 00:42:55.886341  292081 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1217 00:42:55.890047  292081 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1217 00:42:56.226107  292081 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1217 00:42:56.640330  292081 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1217 00:42:57.225156  292081 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1217 00:42:57.226195  292081 kubeadm.go:319] 
	I1217 00:42:57.226269  292081 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1217 00:42:57.226277  292081 kubeadm.go:319] 
	I1217 00:42:57.226356  292081 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1217 00:42:57.226362  292081 kubeadm.go:319] 
	I1217 00:42:57.226384  292081 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1217 00:42:57.226480  292081 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1217 00:42:57.226598  292081 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1217 00:42:57.226616  292081 kubeadm.go:319] 
	I1217 00:42:57.226703  292081 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1217 00:42:57.226717  292081 kubeadm.go:319] 
	I1217 00:42:57.226785  292081 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1217 00:42:57.226798  292081 kubeadm.go:319] 
	I1217 00:42:57.226881  292081 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1217 00:42:57.227022  292081 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1217 00:42:57.227145  292081 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1217 00:42:57.227178  292081 kubeadm.go:319] 
	I1217 00:42:57.227302  292081 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1217 00:42:57.227423  292081 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1217 00:42:57.227438  292081 kubeadm.go:319] 
	I1217 00:42:57.227564  292081 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token xq2phg.ktr5edtc91gmhzse \
	I1217 00:42:57.227719  292081 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:a7c34974519aee4953e03245da076d7a2eba06e40135880a85806e2dab303fa1 \
	I1217 00:42:57.227749  292081 kubeadm.go:319] 	--control-plane 
	I1217 00:42:57.227759  292081 kubeadm.go:319] 
	I1217 00:42:57.227850  292081 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1217 00:42:57.227860  292081 kubeadm.go:319] 
	I1217 00:42:57.227932  292081 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token xq2phg.ktr5edtc91gmhzse \
	I1217 00:42:57.228080  292081 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:a7c34974519aee4953e03245da076d7a2eba06e40135880a85806e2dab303fa1 
	I1217 00:42:57.230536  292081 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1217 00:42:57.230681  292081 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 00:42:57.230702  292081 cni.go:84] Creating CNI manager for ""
	I1217 00:42:57.230709  292081 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 00:42:57.232260  292081 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1217 00:42:57.233377  292081 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1217 00:42:57.237982  292081 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl ...
	I1217 00:42:57.238010  292081 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1217 00:42:57.251056  292081 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1217 00:42:57.457147  292081 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1217 00:42:57.457215  292081 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:42:57.457244  292081 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-653717 minikube.k8s.io/updated_at=2025_12_17T00_42_57_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=c7bb9b74fe8fa422b352c813eb039f077f405cb1 minikube.k8s.io/name=newest-cni-653717 minikube.k8s.io/primary=true
	I1217 00:42:57.466893  292081 ops.go:34] apiserver oom_adj: -16
	I1217 00:42:57.539907  292081 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:42:58.040174  292081 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:42:58.540174  292081 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:42:59.040156  292081 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:42:59.541018  292081 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1217 00:42:56.803720  290128 pod_ready.go:104] pod "coredns-7d764666f9-6ql6r" is not "Ready", error: <nil>
	W1217 00:42:59.302506  290128 pod_ready.go:104] pod "coredns-7d764666f9-6ql6r" is not "Ready", error: <nil>
	I1217 00:43:00.040549  292081 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:43:00.540155  292081 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:43:01.040145  292081 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:43:01.540016  292081 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:43:02.040137  292081 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:43:02.118174  292081 kubeadm.go:1114] duration metric: took 4.661014005s to wait for elevateKubeSystemPrivileges
	I1217 00:43:02.118212  292081 kubeadm.go:403] duration metric: took 13.3809193s to StartCluster
	I1217 00:43:02.118233  292081 settings.go:142] acquiring lock: {Name:mk7d7632cd00ceda791845d793d841181ea8188a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:43:02.118312  292081 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22168-12816/kubeconfig
	I1217 00:43:02.120829  292081 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/kubeconfig: {Name:mkd977475b39383babd1c1bcd25b2b3c1ea2d727 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:43:02.121226  292081 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1217 00:43:02.121245  292081 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 00:43:02.121324  292081 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 00:43:02.121418  292081 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-653717"
	I1217 00:43:02.121438  292081 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-653717"
	I1217 00:43:02.121436  292081 config.go:182] Loaded profile config "newest-cni-653717": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1217 00:43:02.121469  292081 host.go:66] Checking if "newest-cni-653717" exists ...
	I1217 00:43:02.121488  292081 addons.go:70] Setting default-storageclass=true in profile "newest-cni-653717"
	I1217 00:43:02.121503  292081 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-653717"
	I1217 00:43:02.121916  292081 cli_runner.go:164] Run: docker container inspect newest-cni-653717 --format={{.State.Status}}
	I1217 00:43:02.122170  292081 cli_runner.go:164] Run: docker container inspect newest-cni-653717 --format={{.State.Status}}
	I1217 00:43:02.122912  292081 out.go:179] * Verifying Kubernetes components...
	I1217 00:43:02.124175  292081 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:43:02.146305  292081 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 00:43:02.147710  292081 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:43:02.147731  292081 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 00:43:02.147787  292081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-653717
	I1217 00:43:02.148293  292081 addons.go:239] Setting addon default-storageclass=true in "newest-cni-653717"
	I1217 00:43:02.148333  292081 host.go:66] Checking if "newest-cni-653717" exists ...
	I1217 00:43:02.148901  292081 cli_runner.go:164] Run: docker container inspect newest-cni-653717 --format={{.State.Status}}
	I1217 00:43:02.179054  292081 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 00:43:02.179080  292081 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 00:43:02.179142  292081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-653717
	I1217 00:43:02.182767  292081 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/newest-cni-653717/id_rsa Username:docker}
	I1217 00:43:02.204665  292081 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/newest-cni-653717/id_rsa Username:docker}
	I1217 00:43:02.226181  292081 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1217 00:43:02.291907  292081 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 00:43:02.303842  292081 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:43:02.313910  292081 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:43:02.415698  292081 start.go:977] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1217 00:43:02.417709  292081 api_server.go:52] waiting for apiserver process to appear ...
	I1217 00:43:02.417770  292081 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:43:02.595714  292081 api_server.go:72] duration metric: took 474.434249ms to wait for apiserver process to appear ...
	I1217 00:43:02.595741  292081 api_server.go:88] waiting for apiserver healthz status ...
	I1217 00:43:02.595765  292081 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1217 00:43:02.601058  292081 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1217 00:43:02.601705  292081 api_server.go:141] control plane version: v1.35.0-beta.0
	I1217 00:43:02.601724  292081 api_server.go:131] duration metric: took 5.976027ms to wait for apiserver health ...
	I1217 00:43:02.601732  292081 system_pods.go:43] waiting for kube-system pods to appear ...
	I1217 00:43:02.602188  292081 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1217 00:43:02.603119  292081 addons.go:530] duration metric: took 481.791707ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1217 00:43:02.603985  292081 system_pods.go:59] 8 kube-system pods found
	I1217 00:43:02.604034  292081 system_pods.go:61] "coredns-7d764666f9-djwjl" [741342b4-626d-4282-ba19-0e8b37eb2556] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1217 00:43:02.604041  292081 system_pods.go:61] "etcd-newest-cni-653717" [8210d4d5-f66f-43fe-b160-e85265f0dcd0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1217 00:43:02.604047  292081 system_pods.go:61] "kindnet-xmw8c" [7688d3d1-e8d9-4b27-bd63-412f8972c114] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1217 00:43:02.604053  292081 system_pods.go:61] "kube-apiserver-newest-cni-653717" [2a8f1a0d-5c29-49c7-b857-e82bc22e048f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1217 00:43:02.604058  292081 system_pods.go:61] "kube-controller-manager-newest-cni-653717" [d368a2d6-d0bf-4119-982a-d08d313d1433] Running
	I1217 00:43:02.604063  292081 system_pods.go:61] "kube-proxy-9jd8t" [e7d2bcca-b703-4fd2-9af0-c08825a47e85] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1217 00:43:02.604067  292081 system_pods.go:61] "kube-scheduler-newest-cni-653717" [f17c94c7-8363-4f0d-a31c-6db9a2b0f14c] Running
	I1217 00:43:02.604071  292081 system_pods.go:61] "storage-provisioner" [e5c636ed-8536-4f92-8033-757cda2e5a8e] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1217 00:43:02.604076  292081 system_pods.go:74] duration metric: took 2.338809ms to wait for pod list to return data ...
	I1217 00:43:02.604084  292081 default_sa.go:34] waiting for default service account to be created ...
	I1217 00:43:02.605876  292081 default_sa.go:45] found service account: "default"
	I1217 00:43:02.605895  292081 default_sa.go:55] duration metric: took 1.805946ms for default service account to be created ...
	I1217 00:43:02.605903  292081 kubeadm.go:587] duration metric: took 484.627665ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1217 00:43:02.605916  292081 node_conditions.go:102] verifying NodePressure condition ...
	I1217 00:43:02.607656  292081 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1217 00:43:02.607674  292081 node_conditions.go:123] node cpu capacity is 8
	I1217 00:43:02.607685  292081 node_conditions.go:105] duration metric: took 1.765023ms to run NodePressure ...
	I1217 00:43:02.607695  292081 start.go:242] waiting for startup goroutines ...
	I1217 00:43:02.920711  292081 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-653717" context rescaled to 1 replicas
	I1217 00:43:02.920758  292081 start.go:247] waiting for cluster config update ...
	I1217 00:43:02.920775  292081 start.go:256] writing updated cluster config ...
	I1217 00:43:02.921106  292081 ssh_runner.go:195] Run: rm -f paused
	I1217 00:43:02.971248  292081 start.go:625] kubectl: 1.34.3, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1217 00:43:02.973174  292081 out.go:179] * Done! kubectl is now configured to use "newest-cni-653717" cluster and "default" namespace by default
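
Both profiles come up cleanly here; the last thing the log records before declaring "Done!" is polling the apiserver's /healthz endpoint until it answers 200 (the api_server.go lines above). Below is a minimal Go stand-in for that wait loop, not minikube's actual implementation: the endpoint 192.168.94.2:8443 is taken from this run, and the TLS handling is simplified (minikube authenticates with client certificates, which this sketch skips).

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls an apiserver /healthz URL until it returns 200 or the
// deadline passes. Simplified stand-in for the wait loop shown in the log.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			// InsecureSkipVerify is for illustration only; a real check
			// should trust the cluster CA instead.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
}

func main() {
	// 192.168.94.2:8443 is the control-plane endpoint from this run.
	if err := waitForHealthz("https://192.168.94.2:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("apiserver healthy")
}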
	
	
	==> CRI-O <==
	Dec 17 00:43:02 newest-cni-653717 crio[773]: time="2025-12-17T00:43:02.643526496Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 00:43:02 newest-cni-653717 crio[773]: time="2025-12-17T00:43:02.647209914Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=535c2268-5c70-4003-9bcb-3408758b5122 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 17 00:43:02 newest-cni-653717 crio[773]: time="2025-12-17T00:43:02.647960637Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=a5777142-c739-4eac-8ce5-ee148f06e000 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 17 00:43:02 newest-cni-653717 crio[773]: time="2025-12-17T00:43:02.649144523Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 17 00:43:02 newest-cni-653717 crio[773]: time="2025-12-17T00:43:02.64968888Z" level=info msg="Ran pod sandbox da59df277ced325a477ec5168ac61eb0396b029440a66c5abc0e40a8628aea55 with infra container: kube-system/kube-proxy-9jd8t/POD" id=535c2268-5c70-4003-9bcb-3408758b5122 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 17 00:43:02 newest-cni-653717 crio[773]: time="2025-12-17T00:43:02.649746031Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 17 00:43:02 newest-cni-653717 crio[773]: time="2025-12-17T00:43:02.650616324Z" level=info msg="Ran pod sandbox 47c442a4a462349b1ff549ccaf02c653220cab28597dd979ccc4f470e0295f5b with infra container: kube-system/kindnet-xmw8c/POD" id=a5777142-c739-4eac-8ce5-ee148f06e000 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 17 00:43:02 newest-cni-653717 crio[773]: time="2025-12-17T00:43:02.650860374Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=a04755b6-ef6f-4ce5-b1d6-ca08b07c50dc name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:43:02 newest-cni-653717 crio[773]: time="2025-12-17T00:43:02.651545244Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=daf73c39-ffaf-4355-8a11-ad476e90239f name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:43:02 newest-cni-653717 crio[773]: time="2025-12-17T00:43:02.651804718Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=bce8d5e3-b8c5-43cc-a168-204b18f8d2b9 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:43:02 newest-cni-653717 crio[773]: time="2025-12-17T00:43:02.65241897Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=14381a43-efad-4b95-84a1-418d6aa58887 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:43:02 newest-cni-653717 crio[773]: time="2025-12-17T00:43:02.655247466Z" level=info msg="Creating container: kube-system/kube-proxy-9jd8t/kube-proxy" id=61f940c0-0dce-4936-90f0-1ab951ef1862 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 00:43:02 newest-cni-653717 crio[773]: time="2025-12-17T00:43:02.655352923Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 00:43:02 newest-cni-653717 crio[773]: time="2025-12-17T00:43:02.656861642Z" level=info msg="Creating container: kube-system/kindnet-xmw8c/kindnet-cni" id=0ecc9960-3f9e-4cb1-9807-ec1fbdd7ac16 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 00:43:02 newest-cni-653717 crio[773]: time="2025-12-17T00:43:02.656961855Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 00:43:02 newest-cni-653717 crio[773]: time="2025-12-17T00:43:02.659793357Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 00:43:02 newest-cni-653717 crio[773]: time="2025-12-17T00:43:02.660199508Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 00:43:02 newest-cni-653717 crio[773]: time="2025-12-17T00:43:02.661163082Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 00:43:02 newest-cni-653717 crio[773]: time="2025-12-17T00:43:02.661566388Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 00:43:02 newest-cni-653717 crio[773]: time="2025-12-17T00:43:02.691852756Z" level=info msg="Created container f8785afe6d6a0b8f5c9c615da7df0e02f3032328cce06ba899d9c6f736f56bde: kube-system/kindnet-xmw8c/kindnet-cni" id=0ecc9960-3f9e-4cb1-9807-ec1fbdd7ac16 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 00:43:02 newest-cni-653717 crio[773]: time="2025-12-17T00:43:02.692499328Z" level=info msg="Starting container: f8785afe6d6a0b8f5c9c615da7df0e02f3032328cce06ba899d9c6f736f56bde" id=4f2c950e-34fe-4c85-897c-f9370aae8ef9 name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 00:43:02 newest-cni-653717 crio[773]: time="2025-12-17T00:43:02.693780215Z" level=info msg="Created container acfde35a8772238d56ef70b932a76a78e936c6dca8f52d501718d94d1da1019c: kube-system/kube-proxy-9jd8t/kube-proxy" id=61f940c0-0dce-4936-90f0-1ab951ef1862 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 00:43:02 newest-cni-653717 crio[773]: time="2025-12-17T00:43:02.694164604Z" level=info msg="Started container" PID=1592 containerID=f8785afe6d6a0b8f5c9c615da7df0e02f3032328cce06ba899d9c6f736f56bde description=kube-system/kindnet-xmw8c/kindnet-cni id=4f2c950e-34fe-4c85-897c-f9370aae8ef9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=47c442a4a462349b1ff549ccaf02c653220cab28597dd979ccc4f470e0295f5b
	Dec 17 00:43:02 newest-cni-653717 crio[773]: time="2025-12-17T00:43:02.694351775Z" level=info msg="Starting container: acfde35a8772238d56ef70b932a76a78e936c6dca8f52d501718d94d1da1019c" id=cae09e41-dc64-4f6e-9a88-9fd63b7125f5 name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 00:43:02 newest-cni-653717 crio[773]: time="2025-12-17T00:43:02.697599541Z" level=info msg="Started container" PID=1593 containerID=acfde35a8772238d56ef70b932a76a78e936c6dca8f52d501718d94d1da1019c description=kube-system/kube-proxy-9jd8t/kube-proxy id=cae09e41-dc64-4f6e-9a88-9fd63b7125f5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=da59df277ced325a477ec5168ac61eb0396b029440a66c5abc0e40a8628aea55
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	f8785afe6d6a0       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   1 second ago        Running             kindnet-cni               0                   47c442a4a4623       kindnet-xmw8c                               kube-system
	acfde35a87722       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810   1 second ago        Running             kube-proxy                0                   da59df277ced3       kube-proxy-9jd8t                            kube-system
	749f646fe9922       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc   11 seconds ago      Running             kube-controller-manager   0                   98935d6e6f477       kube-controller-manager-newest-cni-653717   kube-system
	06e6e7e9914d4       aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b   11 seconds ago      Running             kube-apiserver            0                   1724397b177c7       kube-apiserver-newest-cni-653717            kube-system
	5a07f3bf31d27       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46   11 seconds ago      Running             kube-scheduler            0                   2d815ca239a84       kube-scheduler-newest-cni-653717            kube-system
	35eb082cc70dc       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   11 seconds ago      Running             etcd                      0                   57b9a3a9656f4       etcd-newest-cni-653717                      kube-system
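
The "container status" table above is the crictl view of the node's containers. For reference, a small Go wrapper that produces the same listing is sketched below; it assumes crictl is on PATH and that the caller can reach the CRI-O socket (inside the minikube node this normally means running as root).

package main

import (
	"fmt"
	"os/exec"
)

// List all CRI containers, the same command the "container status" section
// above was captured from.
func main() {
	out, err := exec.Command("crictl", "ps", "-a").CombinedOutput()
	if err != nil {
		fmt.Printf("crictl failed: %v\n%s", err, out)
		return
	}
	fmt.Print(string(out))
}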
	
	
	==> describe nodes <==
	Name:               newest-cni-653717
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-653717
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c7bb9b74fe8fa422b352c813eb039f077f405cb1
	                    minikube.k8s.io/name=newest-cni-653717
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_17T00_42_57_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Dec 2025 00:42:54 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-653717
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Dec 2025 00:42:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Dec 2025 00:42:56 +0000   Wed, 17 Dec 2025 00:42:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Dec 2025 00:42:56 +0000   Wed, 17 Dec 2025 00:42:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Dec 2025 00:42:56 +0000   Wed, 17 Dec 2025 00:42:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Wed, 17 Dec 2025 00:42:56 +0000   Wed, 17 Dec 2025 00:42:52 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    newest-cni-653717
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 f435001bb2f82675fe905cfc693dda54
	  System UUID:                50a52395-faf4-409e-a6ea-aa486ab479f3
	  Boot ID:                    0e9cedc6-c46e-4354-b3d2-9272a8b33ae5
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-653717                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         8s
	  kube-system                 kindnet-xmw8c                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      3s
	  kube-system                 kube-apiserver-newest-cni-653717             250m (3%)     0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kube-controller-manager-newest-cni-653717    200m (2%)     0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kube-proxy-9jd8t                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3s
	  kube-system                 kube-scheduler-newest-cni-653717             100m (1%)     0 (0%)      0 (0%)           0 (0%)         8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  3s    node-controller  Node newest-cni-653717 event: Registered Node newest-cni-653717 in Controller
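
At capture time the node still carries the node.kubernetes.io/not-ready taint because the CNI config had not yet been written (the kindnet pod had only just started). A small sketch for watching the Ready condition flip is below; the node name and the jsonpath query are illustrative, taken from this run rather than from the test harness.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// Poll the Ready condition of the node shown above until it becomes True.
// Mirrors the Conditions table from "kubectl describe node" in scriptable form.
func main() {
	jsonpath := `jsonpath={.status.conditions[?(@.type=="Ready")].status}`
	for i := 0; i < 60; i++ {
		out, err := exec.Command("kubectl", "get", "node", "newest-cni-653717",
			"-o", jsonpath).Output()
		if err == nil && strings.TrimSpace(string(out)) == "True" {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("node did not become Ready in time")
}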
	
	
	==> dmesg <==
	[  +0.089382] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024236] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.864694] kauditd_printk_skb: 47 callbacks suppressed
	[Dec17 00:07] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +1.006904] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +1.023891] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +1.023891] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +1.023853] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +1.023879] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +2.048755] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +4.030595] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +8.447143] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[ +16.382404] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000015] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[Dec17 00:08] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	
	
	==> etcd [35eb082cc70dcd3e3aeb6403112a19663db0f39ee0c40872f568e6f4c143d165] <==
	{"level":"warn","ts":"2025-12-17T00:42:53.373756Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57362","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:42:53.382435Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57384","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:42:53.391179Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57402","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:42:53.402217Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57422","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:42:53.410521Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57442","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:42:53.419080Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57466","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:42:53.427101Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57486","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:42:53.444466Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57526","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:42:53.460982Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57548","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:42:53.474235Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57578","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:42:53.480262Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:42:53.490170Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:42:53.499253Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57634","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:42:53.510541Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57652","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:42:53.523091Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57670","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:42:53.532182Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57696","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:42:53.541102Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57708","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:42:53.549967Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50138","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:42:53.558258Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50156","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:42:53.566393Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50174","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:42:53.588500Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50184","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:42:53.596728Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50218","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:42:53.607448Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50236","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:42:53.619609Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50250","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:42:53.673748Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50264","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 00:43:04 up  1:25,  0 user,  load average: 3.77, 2.88, 1.94
	Linux newest-cni-653717 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [f8785afe6d6a0b8f5c9c615da7df0e02f3032328cce06ba899d9c6f736f56bde] <==
	I1217 00:43:02.939158       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1217 00:43:02.939443       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1217 00:43:02.939570       1 main.go:148] setting mtu 1500 for CNI 
	I1217 00:43:02.939586       1 main.go:178] kindnetd IP family: "ipv4"
	I1217 00:43:02.939608       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-17T00:43:03Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1217 00:43:03.142665       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1217 00:43:03.142702       1 controller.go:381] "Waiting for informer caches to sync"
	I1217 00:43:03.142716       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1217 00:43:03.142872       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1217 00:43:03.487361       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1217 00:43:03.487411       1 metrics.go:72] Registering metrics
	I1217 00:43:03.487493       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [06e6e7e9914d44d4a42f687f2be58ee39e48c551f8688da799a2c67375e97343] <==
	I1217 00:42:54.280969       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1217 00:42:54.281059       1 default_servicecidr_controller.go:169] Creating default ServiceCIDR with CIDRs: [10.96.0.0/12]
	I1217 00:42:54.281389       1 controller.go:667] quota admission added evaluator for: namespaces
	I1217 00:42:54.285400       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1217 00:42:54.286213       1 default_servicecidr_controller.go:231] Setting default ServiceCIDR condition Ready to True
	I1217 00:42:54.289690       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1217 00:42:54.291245       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1217 00:42:54.323481       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1217 00:42:55.184671       1 storage_scheduling.go:123] created PriorityClass system-node-critical with value 2000001000
	I1217 00:42:55.188609       1 storage_scheduling.go:123] created PriorityClass system-cluster-critical with value 2000000000
	I1217 00:42:55.188628       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1217 00:42:55.624968       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1217 00:42:55.659912       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1217 00:42:55.792050       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1217 00:42:55.804308       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I1217 00:42:55.805808       1 controller.go:667] quota admission added evaluator for: endpoints
	I1217 00:42:55.811600       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1217 00:42:56.221983       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1217 00:42:56.630344       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1217 00:42:56.639553       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1217 00:42:56.646698       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1217 00:43:01.722703       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1217 00:43:01.976104       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1217 00:43:01.979208       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1217 00:43:02.025395       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [749f646fe992252be4864bd2b685c172e1571f7ca3000a086feab12a1372b577] <==
	I1217 00:43:01.036349       1 shared_informer.go:377] "Caches are synced"
	I1217 00:43:01.036356       1 shared_informer.go:377] "Caches are synced"
	I1217 00:43:01.036362       1 shared_informer.go:377] "Caches are synced"
	I1217 00:43:01.036368       1 shared_informer.go:377] "Caches are synced"
	I1217 00:43:01.040984       1 range_allocator.go:177] "Sending events to api server"
	I1217 00:43:01.041091       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1217 00:43:01.041134       1 shared_informer.go:370] "Waiting for caches to sync"
	I1217 00:43:01.041160       1 shared_informer.go:377] "Caches are synced"
	I1217 00:43:01.037106       1 shared_informer.go:377] "Caches are synced"
	I1217 00:43:01.037229       1 shared_informer.go:377] "Caches are synced"
	I1217 00:43:01.037245       1 shared_informer.go:377] "Caches are synced"
	I1217 00:43:01.037701       1 shared_informer.go:377] "Caches are synced"
	I1217 00:43:01.037676       1 shared_informer.go:377] "Caches are synced"
	I1217 00:43:01.038671       1 shared_informer.go:377] "Caches are synced"
	I1217 00:43:01.038702       1 shared_informer.go:377] "Caches are synced"
	I1217 00:43:01.037708       1 shared_informer.go:377] "Caches are synced"
	I1217 00:43:01.037689       1 shared_informer.go:377] "Caches are synced"
	I1217 00:43:01.038492       1 shared_informer.go:377] "Caches are synced"
	I1217 00:43:01.038647       1 shared_informer.go:377] "Caches are synced"
	I1217 00:43:01.043442       1 shared_informer.go:377] "Caches are synced"
	I1217 00:43:01.050338       1 range_allocator.go:433] "Set node PodCIDR" node="newest-cni-653717" podCIDRs=["10.42.0.0/24"]
	I1217 00:43:01.135892       1 shared_informer.go:377] "Caches are synced"
	I1217 00:43:01.138110       1 shared_informer.go:377] "Caches are synced"
	I1217 00:43:01.138130       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1217 00:43:01.138138       1 garbagecollector.go:169] "Proceeding to collect garbage"
	
	
	==> kube-proxy [acfde35a8772238d56ef70b932a76a78e936c6dca8f52d501718d94d1da1019c] <==
	I1217 00:43:02.732001       1 server_linux.go:53] "Using iptables proxy"
	I1217 00:43:02.798815       1 shared_informer.go:370] "Waiting for caches to sync"
	I1217 00:43:02.899749       1 shared_informer.go:377] "Caches are synced"
	I1217 00:43:02.899787       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1217 00:43:02.899898       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1217 00:43:02.918307       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1217 00:43:02.918362       1 server_linux.go:136] "Using iptables Proxier"
	I1217 00:43:02.924443       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1217 00:43:02.924955       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1217 00:43:02.925045       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 00:43:02.926769       1 config.go:200] "Starting service config controller"
	I1217 00:43:02.926981       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1217 00:43:02.926812       1 config.go:309] "Starting node config controller"
	I1217 00:43:02.927029       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1217 00:43:02.927035       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1217 00:43:02.926928       1 config.go:106] "Starting endpoint slice config controller"
	I1217 00:43:02.927042       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1217 00:43:02.926914       1 config.go:403] "Starting serviceCIDR config controller"
	I1217 00:43:02.927056       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1217 00:43:03.027682       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1217 00:43:03.027710       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1217 00:43:03.027691       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [5a07f3bf31d2790072c2b33b927502ceedcb024fd2a9d3288be675603636e917] <==
	E1217 00:42:55.065192       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot watch resource \"statefulsets\" in API group \"apps\" at the cluster scope"
	E1217 00:42:55.066154       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1217 00:42:55.095372       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope"
	E1217 00:42:55.096392       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1217 00:42:55.121616       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot watch resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope"
	E1217 00:42:55.122523       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1217 00:42:55.220930       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot watch resource \"replicasets\" in API group \"apps\" at the cluster scope"
	E1217 00:42:55.221862       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\""
	E1217 00:42:55.221941       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1217 00:42:55.222854       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1693" type="*v1.ConfigMap"
	E1217 00:42:55.340956       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope"
	E1217 00:42:55.341889       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1217 00:42:55.350121       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope"
	E1217 00:42:55.350975       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1217 00:42:55.355811       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot watch resource \"replicationcontrollers\" in API group \"\" at the cluster scope"
	E1217 00:42:55.356773       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1217 00:42:55.376945       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope"
	E1217 00:42:55.377924       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1217 00:42:55.445508       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope"
	E1217 00:42:55.446543       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1217 00:42:55.447390       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope"
	E1217 00:42:55.448374       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1217 00:42:55.464552       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope"
	E1217 00:42:55.465365       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	I1217 00:42:58.243724       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 17 00:42:58 newest-cni-653717 kubelet[1300]: E1217 00:42:58.492457    1300 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-653717" containerName="etcd"
	Dec 17 00:42:58 newest-cni-653717 kubelet[1300]: E1217 00:42:58.492697    1300 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-653717" containerName="kube-apiserver"
	Dec 17 00:42:58 newest-cni-653717 kubelet[1300]: E1217 00:42:58.492889    1300 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-653717" containerName="kube-scheduler"
	Dec 17 00:42:58 newest-cni-653717 kubelet[1300]: I1217 00:42:58.506662    1300 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-scheduler-newest-cni-653717" podStartSLOduration=2.506647357 podStartE2EDuration="2.506647357s" podCreationTimestamp="2025-12-17 00:42:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-17 00:42:57.540166399 +0000 UTC m=+1.146468528" watchObservedRunningTime="2025-12-17 00:42:58.506647357 +0000 UTC m=+2.112949494"
	Dec 17 00:42:59 newest-cni-653717 kubelet[1300]: E1217 00:42:59.494483    1300 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-653717" containerName="etcd"
	Dec 17 00:42:59 newest-cni-653717 kubelet[1300]: E1217 00:42:59.494663    1300 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-653717" containerName="kube-scheduler"
	Dec 17 00:42:59 newest-cni-653717 kubelet[1300]: E1217 00:42:59.494687    1300 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-653717" containerName="kube-apiserver"
	Dec 17 00:43:01 newest-cni-653717 kubelet[1300]: I1217 00:43:01.089573    1300 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Dec 17 00:43:01 newest-cni-653717 kubelet[1300]: I1217 00:43:01.090252    1300 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Dec 17 00:43:01 newest-cni-653717 kubelet[1300]: I1217 00:43:01.793069    1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7688d3d1-e8d9-4b27-bd63-412f8972c114-lib-modules\") pod \"kindnet-xmw8c\" (UID: \"7688d3d1-e8d9-4b27-bd63-412f8972c114\") " pod="kube-system/kindnet-xmw8c"
	Dec 17 00:43:01 newest-cni-653717 kubelet[1300]: I1217 00:43:01.793131    1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e7d2bcca-b703-4fd2-9af0-c08825a47e85-xtables-lock\") pod \"kube-proxy-9jd8t\" (UID: \"e7d2bcca-b703-4fd2-9af0-c08825a47e85\") " pod="kube-system/kube-proxy-9jd8t"
	Dec 17 00:43:01 newest-cni-653717 kubelet[1300]: I1217 00:43:01.793160    1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e7d2bcca-b703-4fd2-9af0-c08825a47e85-kube-proxy\") pod \"kube-proxy-9jd8t\" (UID: \"e7d2bcca-b703-4fd2-9af0-c08825a47e85\") " pod="kube-system/kube-proxy-9jd8t"
	Dec 17 00:43:01 newest-cni-653717 kubelet[1300]: I1217 00:43:01.793258    1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e7d2bcca-b703-4fd2-9af0-c08825a47e85-lib-modules\") pod \"kube-proxy-9jd8t\" (UID: \"e7d2bcca-b703-4fd2-9af0-c08825a47e85\") " pod="kube-system/kube-proxy-9jd8t"
	Dec 17 00:43:01 newest-cni-653717 kubelet[1300]: I1217 00:43:01.793282    1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7688d3d1-e8d9-4b27-bd63-412f8972c114-xtables-lock\") pod \"kindnet-xmw8c\" (UID: \"7688d3d1-e8d9-4b27-bd63-412f8972c114\") " pod="kube-system/kindnet-xmw8c"
	Dec 17 00:43:01 newest-cni-653717 kubelet[1300]: I1217 00:43:01.793305    1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8p5t\" (UniqueName: \"kubernetes.io/projected/e7d2bcca-b703-4fd2-9af0-c08825a47e85-kube-api-access-d8p5t\") pod \"kube-proxy-9jd8t\" (UID: \"e7d2bcca-b703-4fd2-9af0-c08825a47e85\") " pod="kube-system/kube-proxy-9jd8t"
	Dec 17 00:43:01 newest-cni-653717 kubelet[1300]: I1217 00:43:01.793332    1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/7688d3d1-e8d9-4b27-bd63-412f8972c114-cni-cfg\") pod \"kindnet-xmw8c\" (UID: \"7688d3d1-e8d9-4b27-bd63-412f8972c114\") " pod="kube-system/kindnet-xmw8c"
	Dec 17 00:43:01 newest-cni-653717 kubelet[1300]: I1217 00:43:01.793360    1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wn9lr\" (UniqueName: \"kubernetes.io/projected/7688d3d1-e8d9-4b27-bd63-412f8972c114-kube-api-access-wn9lr\") pod \"kindnet-xmw8c\" (UID: \"7688d3d1-e8d9-4b27-bd63-412f8972c114\") " pod="kube-system/kindnet-xmw8c"
	Dec 17 00:43:01 newest-cni-653717 kubelet[1300]: E1217 00:43:01.900638    1300 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Dec 17 00:43:01 newest-cni-653717 kubelet[1300]: E1217 00:43:01.900678    1300 projected.go:196] Error preparing data for projected volume kube-api-access-d8p5t for pod kube-system/kube-proxy-9jd8t: configmap "kube-root-ca.crt" not found
	Dec 17 00:43:01 newest-cni-653717 kubelet[1300]: E1217 00:43:01.900641    1300 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Dec 17 00:43:01 newest-cni-653717 kubelet[1300]: E1217 00:43:01.900760    1300 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e7d2bcca-b703-4fd2-9af0-c08825a47e85-kube-api-access-d8p5t podName:e7d2bcca-b703-4fd2-9af0-c08825a47e85 nodeName:}" failed. No retries permitted until 2025-12-17 00:43:02.400732168 +0000 UTC m=+6.007034298 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-d8p5t" (UniqueName: "kubernetes.io/projected/e7d2bcca-b703-4fd2-9af0-c08825a47e85-kube-api-access-d8p5t") pod "kube-proxy-9jd8t" (UID: "e7d2bcca-b703-4fd2-9af0-c08825a47e85") : configmap "kube-root-ca.crt" not found
	Dec 17 00:43:01 newest-cni-653717 kubelet[1300]: E1217 00:43:01.900774    1300 projected.go:196] Error preparing data for projected volume kube-api-access-wn9lr for pod kube-system/kindnet-xmw8c: configmap "kube-root-ca.crt" not found
	Dec 17 00:43:01 newest-cni-653717 kubelet[1300]: E1217 00:43:01.900854    1300 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7688d3d1-e8d9-4b27-bd63-412f8972c114-kube-api-access-wn9lr podName:7688d3d1-e8d9-4b27-bd63-412f8972c114 nodeName:}" failed. No retries permitted until 2025-12-17 00:43:02.400825183 +0000 UTC m=+6.007127305 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-wn9lr" (UniqueName: "kubernetes.io/projected/7688d3d1-e8d9-4b27-bd63-412f8972c114-kube-api-access-wn9lr") pod "kindnet-xmw8c" (UID: "7688d3d1-e8d9-4b27-bd63-412f8972c114") : configmap "kube-root-ca.crt" not found
	Dec 17 00:43:03 newest-cni-653717 kubelet[1300]: I1217 00:43:03.515941    1300 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kindnet-xmw8c" podStartSLOduration=2.515921459 podStartE2EDuration="2.515921459s" podCreationTimestamp="2025-12-17 00:43:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-17 00:43:03.515635781 +0000 UTC m=+7.121937909" watchObservedRunningTime="2025-12-17 00:43:03.515921459 +0000 UTC m=+7.122223591"
	Dec 17 00:43:03 newest-cni-653717 kubelet[1300]: I1217 00:43:03.525107    1300 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-9jd8t" podStartSLOduration=2.52508933 podStartE2EDuration="2.52508933s" podCreationTimestamp="2025-12-17 00:43:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-17 00:43:03.5249693 +0000 UTC m=+7.131271429" watchObservedRunningTime="2025-12-17 00:43:03.52508933 +0000 UTC m=+7.131391459"
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-653717 -n newest-cni-653717
helpers_test.go:270: (dbg) Run:  kubectl --context newest-cni-653717 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: coredns-7d764666f9-djwjl storage-provisioner
helpers_test.go:283: ======> post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context newest-cni-653717 describe pod coredns-7d764666f9-djwjl storage-provisioner
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context newest-cni-653717 describe pod coredns-7d764666f9-djwjl storage-provisioner: exit status 1 (67.270954ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-7d764666f9-djwjl" not found
	Error from server (NotFound): pods "storage-provisioner" not found

** /stderr **
helpers_test.go:288: kubectl --context newest-cni-653717 describe pod coredns-7d764666f9-djwjl storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.08s)
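The post-mortem above first lists pods that are not Running (helpers_test.go:270) and then describes each of them (helpers_test.go:286); the describe step returns NotFound here most likely because no namespace is passed while the listed pods live in kube-system. Below is a minimal, hypothetical stand-alone sketch of those two steps, not the actual test helper; the context name newest-cni-653717 is taken from this run.

	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	// Hypothetical re-run of the post-mortem steps shown above: list pods that
	// are not Running, then describe each one. The context name comes from
	// this report and would change per profile.
	func main() {
		ctx := "newest-cni-653717"
		out, err := exec.Command("kubectl", "--context", ctx, "get", "po", "-A",
			"-o=jsonpath={.items[*].metadata.name}",
			"--field-selector=status.phase!=Running").Output()
		if err != nil {
			fmt.Println("listing non-running pods failed:", err)
			return
		}
		for _, pod := range strings.Fields(string(out)) {
			// Like the helper above, this passes no namespace, so kube-system
			// pods come back as NotFound; add "-n kube-system" to describe them.
			desc, derr := exec.Command("kubectl", "--context", ctx, "describe", "pod", pod).CombinedOutput()
			fmt.Printf("--- %s ---\n%s", pod, desc)
			if derr != nil {
				fmt.Println("describe failed:", derr)
			}
		}
	}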

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.15s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-414413 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-414413 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (249.546331ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T00:43:04Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-414413 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-414413 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-414413 describe deploy/metrics-server -n kube-system: exit status 1 (63.640547ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-414413 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
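The enable command failed before any metrics-server deployment was created, which is why kubectl reports it as not found: the stderr block above shows the MK_ADDON_ENABLE_PAUSED error, i.e. the paused-state check "sudo runc list -f json" failing with "open /run/runc: no such file or directory" inside the node. Below is a minimal, hypothetical sketch for re-running that same check from the host; it is not part of minikube, and the container name default-k8s-diff-port-414413 is taken from this run.

	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	// Hypothetical triage helper: re-run the paused-state check that the addon
	// command reports as failing ("sudo runc list -f json") inside the node
	// container. In this run the same command failed with
	// "open /run/runc: no such file or directory".
	func main() {
		node := "default-k8s-diff-port-414413"
		out, err := exec.Command("docker", "exec", node,
			"sudo", "runc", "list", "-f", "json").CombinedOutput()
		fmt.Printf("%s", out)
		if err != nil {
			fmt.Println("runc list failed:", err)
		}
	}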
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect default-k8s-diff-port-414413
helpers_test.go:244: (dbg) docker inspect default-k8s-diff-port-414413:

-- stdout --
	[
	    {
	        "Id": "32e520445c9ef469b69a7cfa94fa07b2c047bc072eab1f9bd789716ea62b2b17",
	        "Created": "2025-12-17T00:42:18.411894947Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 286066,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T00:42:18.448054152Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2e44aac5cae5bb6b68b129ed5c85e80a5c1aac07706537d46ba12326f0e5c3cf",
	        "ResolvConfPath": "/var/lib/docker/containers/32e520445c9ef469b69a7cfa94fa07b2c047bc072eab1f9bd789716ea62b2b17/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/32e520445c9ef469b69a7cfa94fa07b2c047bc072eab1f9bd789716ea62b2b17/hostname",
	        "HostsPath": "/var/lib/docker/containers/32e520445c9ef469b69a7cfa94fa07b2c047bc072eab1f9bd789716ea62b2b17/hosts",
	        "LogPath": "/var/lib/docker/containers/32e520445c9ef469b69a7cfa94fa07b2c047bc072eab1f9bd789716ea62b2b17/32e520445c9ef469b69a7cfa94fa07b2c047bc072eab1f9bd789716ea62b2b17-json.log",
	        "Name": "/default-k8s-diff-port-414413",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-414413:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-414413",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "32e520445c9ef469b69a7cfa94fa07b2c047bc072eab1f9bd789716ea62b2b17",
	                "LowerDir": "/var/lib/docker/overlay2/f63ae4354f75340680ea6735a9f2526da1a4c2e021a8a8e10a3b649ecbc014e0-init/diff:/var/lib/docker/overlay2/594b812fd6d8db89dab322ea9e00d43dd555e9709fb5e6953e3873cce717392c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f63ae4354f75340680ea6735a9f2526da1a4c2e021a8a8e10a3b649ecbc014e0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f63ae4354f75340680ea6735a9f2526da1a4c2e021a8a8e10a3b649ecbc014e0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f63ae4354f75340680ea6735a9f2526da1a4c2e021a8a8e10a3b649ecbc014e0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-414413",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-414413/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-414413",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-414413",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-414413",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "d19b322f036e8d36d3209d0c1cdc0a554d922bd40ead24e764f4547177534749",
	            "SandboxKey": "/var/run/docker/netns/d19b322f036e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33078"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33079"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33082"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33080"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33081"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-414413": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a57026acfc125e7890d2c5444987c6f9f2a024f5d99a4bf5d6821c92ba08cc07",
	                    "EndpointID": "89b60f7ca9d9298a703ee824a8eaa7bf047ea05b06147e2780778f4ced6817a6",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "26:c2:6a:74:17:42",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-414413",
	                        "32e520445c9e"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-414413 -n default-k8s-diff-port-414413
helpers_test.go:253: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-414413 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-414413 logs -n 25: (1.021396023s)
helpers_test.go:261: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬───────
──────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼───────
──────────────┤
	│ delete  │ -p stopped-upgrade-028618                                                                                                                                                                                                                            │ stopped-upgrade-028618       │ jenkins │ v1.37.0 │ 17 Dec 25 00:41 UTC │ 17 Dec 25 00:41 UTC │
	│ start   │ -p no-preload-864613 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-864613            │ jenkins │ v1.37.0 │ 17 Dec 25 00:41 UTC │ 17 Dec 25 00:42 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-742860 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ old-k8s-version-742860       │ jenkins │ v1.37.0 │ 17 Dec 25 00:41 UTC │ 17 Dec 25 00:41 UTC │
	│ start   │ -p old-k8s-version-742860 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0        │ old-k8s-version-742860       │ jenkins │ v1.37.0 │ 17 Dec 25 00:41 UTC │ 17 Dec 25 00:42 UTC │
	│ start   │ -p cert-expiration-753607 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                            │ cert-expiration-753607       │ jenkins │ v1.37.0 │ 17 Dec 25 00:41 UTC │ 17 Dec 25 00:41 UTC │
	│ delete  │ -p cert-expiration-753607                                                                                                                                                                                                                            │ cert-expiration-753607       │ jenkins │ v1.37.0 │ 17 Dec 25 00:41 UTC │ 17 Dec 25 00:42 UTC │
	│ start   │ -p embed-certs-153232 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-153232           │ jenkins │ v1.37.0 │ 17 Dec 25 00:42 UTC │ 17 Dec 25 00:42 UTC │
	│ start   │ -p kubernetes-upgrade-803959 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                                    │ kubernetes-upgrade-803959    │ jenkins │ v1.37.0 │ 17 Dec 25 00:42 UTC │                     │
	│ start   │ -p kubernetes-upgrade-803959 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-803959    │ jenkins │ v1.37.0 │ 17 Dec 25 00:42 UTC │ 17 Dec 25 00:42 UTC │
	│ delete  │ -p kubernetes-upgrade-803959                                                                                                                                                                                                                         │ kubernetes-upgrade-803959    │ jenkins │ v1.37.0 │ 17 Dec 25 00:42 UTC │ 17 Dec 25 00:42 UTC │
	│ delete  │ -p disable-driver-mounts-827138                                                                                                                                                                                                                      │ disable-driver-mounts-827138 │ jenkins │ v1.37.0 │ 17 Dec 25 00:42 UTC │ 17 Dec 25 00:42 UTC │
	│ start   │ -p default-k8s-diff-port-414413 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-414413 │ jenkins │ v1.37.0 │ 17 Dec 25 00:42 UTC │ 17 Dec 25 00:42 UTC │
	│ addons  │ enable metrics-server -p no-preload-864613 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ no-preload-864613            │ jenkins │ v1.37.0 │ 17 Dec 25 00:42 UTC │                     │
	│ stop    │ -p no-preload-864613 --alsologtostderr -v=3                                                                                                                                                                                                          │ no-preload-864613            │ jenkins │ v1.37.0 │ 17 Dec 25 00:42 UTC │ 17 Dec 25 00:42 UTC │
	│ image   │ old-k8s-version-742860 image list --format=json                                                                                                                                                                                                      │ old-k8s-version-742860       │ jenkins │ v1.37.0 │ 17 Dec 25 00:42 UTC │ 17 Dec 25 00:42 UTC │
	│ pause   │ -p old-k8s-version-742860 --alsologtostderr -v=1                                                                                                                                                                                                     │ old-k8s-version-742860       │ jenkins │ v1.37.0 │ 17 Dec 25 00:42 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-864613 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ no-preload-864613            │ jenkins │ v1.37.0 │ 17 Dec 25 00:42 UTC │ 17 Dec 25 00:42 UTC │
	│ start   │ -p no-preload-864613 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-864613            │ jenkins │ v1.37.0 │ 17 Dec 25 00:42 UTC │                     │
	│ delete  │ -p old-k8s-version-742860                                                                                                                                                                                                                            │ old-k8s-version-742860       │ jenkins │ v1.37.0 │ 17 Dec 25 00:42 UTC │ 17 Dec 25 00:42 UTC │
	│ delete  │ -p old-k8s-version-742860                                                                                                                                                                                                                            │ old-k8s-version-742860       │ jenkins │ v1.37.0 │ 17 Dec 25 00:42 UTC │ 17 Dec 25 00:42 UTC │
	│ start   │ -p newest-cni-653717 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-653717            │ jenkins │ v1.37.0 │ 17 Dec 25 00:42 UTC │ 17 Dec 25 00:43 UTC │
	│ addons  │ enable metrics-server -p embed-certs-153232 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ embed-certs-153232           │ jenkins │ v1.37.0 │ 17 Dec 25 00:42 UTC │                     │
	│ stop    │ -p embed-certs-153232 --alsologtostderr -v=3                                                                                                                                                                                                         │ embed-certs-153232           │ jenkins │ v1.37.0 │ 17 Dec 25 00:42 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-653717 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ newest-cni-653717            │ jenkins │ v1.37.0 │ 17 Dec 25 00:43 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-414413 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                   │ default-k8s-diff-port-414413 │ jenkins │ v1.37.0 │ 17 Dec 25 00:43 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴───────
──────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 00:42:39
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 00:42:39.551621  292081 out.go:360] Setting OutFile to fd 1 ...
	I1217 00:42:39.551901  292081 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:42:39.551911  292081 out.go:374] Setting ErrFile to fd 2...
	I1217 00:42:39.551915  292081 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:42:39.552166  292081 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-12816/.minikube/bin
	I1217 00:42:39.552662  292081 out.go:368] Setting JSON to false
	I1217 00:42:39.553726  292081 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":5109,"bootTime":1765927050,"procs":301,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 00:42:39.553780  292081 start.go:143] virtualization: kvm guest
	I1217 00:42:39.555553  292081 out.go:179] * [newest-cni-653717] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 00:42:39.556746  292081 notify.go:221] Checking for updates...
	I1217 00:42:39.556769  292081 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 00:42:39.557949  292081 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 00:42:39.559133  292081 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22168-12816/kubeconfig
	I1217 00:42:39.560242  292081 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-12816/.minikube
	I1217 00:42:39.561274  292081 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 00:42:39.563103  292081 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 00:42:39.564577  292081 config.go:182] Loaded profile config "default-k8s-diff-port-414413": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:42:39.564675  292081 config.go:182] Loaded profile config "embed-certs-153232": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:42:39.564782  292081 config.go:182] Loaded profile config "no-preload-864613": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1217 00:42:39.564899  292081 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 00:42:39.590554  292081 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1217 00:42:39.590699  292081 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:42:39.656559  292081 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-17 00:42:39.646099494 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 00:42:39.656663  292081 docker.go:319] overlay module found
	I1217 00:42:39.659088  292081 out.go:179] * Using the docker driver based on user configuration
	I1217 00:42:39.660142  292081 start.go:309] selected driver: docker
	I1217 00:42:39.660155  292081 start.go:927] validating driver "docker" against <nil>
	I1217 00:42:39.660166  292081 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 00:42:39.660774  292081 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:42:39.722518  292081 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-17 00:42:39.711146936 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 00:42:39.722723  292081 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1217 00:42:39.722757  292081 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1217 00:42:39.723072  292081 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1217 00:42:39.726391  292081 out.go:179] * Using Docker driver with root privileges
	I1217 00:42:39.727427  292081 cni.go:84] Creating CNI manager for ""
	I1217 00:42:39.727511  292081 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 00:42:39.727530  292081 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1217 00:42:39.727629  292081 start.go:353] cluster config:
	{Name:newest-cni-653717 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-653717 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePat
h: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:42:39.729007  292081 out.go:179] * Starting "newest-cni-653717" primary control-plane node in "newest-cni-653717" cluster
	I1217 00:42:39.729981  292081 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 00:42:39.731716  292081 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 00:42:39.732745  292081 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1217 00:42:39.732775  292081 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22168-12816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I1217 00:42:39.732795  292081 cache.go:65] Caching tarball of preloaded images
	I1217 00:42:39.732856  292081 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 00:42:39.732901  292081 preload.go:238] Found /home/jenkins/minikube-integration/22168-12816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1217 00:42:39.732916  292081 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1217 00:42:39.733047  292081 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/newest-cni-653717/config.json ...
	I1217 00:42:39.733072  292081 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/newest-cni-653717/config.json: {Name:mkc027815a15326496ab2408383e384558a71cb4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:42:39.754922  292081 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 00:42:39.754940  292081 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 00:42:39.754960  292081 cache.go:243] Successfully downloaded all kic artifacts
	I1217 00:42:39.755026  292081 start.go:360] acquireMachinesLock for newest-cni-653717: {Name:mk721025c3a21068c756325b281b92cea9d9d432 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 00:42:39.755136  292081 start.go:364] duration metric: took 91.503µs to acquireMachinesLock for "newest-cni-653717"
	I1217 00:42:39.755162  292081 start.go:93] Provisioning new machine with config: &{Name:newest-cni-653717 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-653717 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 00:42:39.755339  292081 start.go:125] createHost starting for "" (driver="docker")
	I1217 00:42:34.990123  290128 out.go:252] * Restarting existing docker container for "no-preload-864613" ...
	I1217 00:42:34.990194  290128 cli_runner.go:164] Run: docker start no-preload-864613
	I1217 00:42:35.271109  290128 cli_runner.go:164] Run: docker container inspect no-preload-864613 --format={{.State.Status}}
	I1217 00:42:35.293226  290128 kic.go:430] container "no-preload-864613" state is running.
	I1217 00:42:35.293636  290128 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-864613
	I1217 00:42:35.318368  290128 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/no-preload-864613/config.json ...
	I1217 00:42:35.318647  290128 machine.go:94] provisionDockerMachine start ...
	I1217 00:42:35.318739  290128 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-864613
	I1217 00:42:35.345305  290128 main.go:143] libmachine: Using SSH client type: native
	I1217 00:42:35.345550  290128 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1217 00:42:35.345563  290128 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 00:42:35.346319  290128 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48788->127.0.0.1:33083: read: connection reset by peer
	I1217 00:42:38.479735  290128 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-864613
	
	I1217 00:42:38.479761  290128 ubuntu.go:182] provisioning hostname "no-preload-864613"
	I1217 00:42:38.479822  290128 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-864613
	I1217 00:42:38.498770  290128 main.go:143] libmachine: Using SSH client type: native
	I1217 00:42:38.499115  290128 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1217 00:42:38.499136  290128 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-864613 && echo "no-preload-864613" | sudo tee /etc/hostname
	I1217 00:42:38.637351  290128 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-864613
	
	I1217 00:42:38.637440  290128 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-864613
	I1217 00:42:38.658230  290128 main.go:143] libmachine: Using SSH client type: native
	I1217 00:42:38.658487  290128 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1217 00:42:38.658515  290128 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-864613' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-864613/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-864613' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 00:42:38.788384  290128 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 00:42:38.788407  290128 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22168-12816/.minikube CaCertPath:/home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22168-12816/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22168-12816/.minikube}
	I1217 00:42:38.788434  290128 ubuntu.go:190] setting up certificates
	I1217 00:42:38.788447  290128 provision.go:84] configureAuth start
	I1217 00:42:38.788515  290128 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-864613
	I1217 00:42:38.807940  290128 provision.go:143] copyHostCerts
	I1217 00:42:38.808027  290128 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-12816/.minikube/ca.pem, removing ...
	I1217 00:42:38.808047  290128 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-12816/.minikube/ca.pem
	I1217 00:42:38.808122  290128 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22168-12816/.minikube/ca.pem (1078 bytes)
	I1217 00:42:38.808261  290128 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-12816/.minikube/cert.pem, removing ...
	I1217 00:42:38.808286  290128 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-12816/.minikube/cert.pem
	I1217 00:42:38.808332  290128 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22168-12816/.minikube/cert.pem (1123 bytes)
	I1217 00:42:38.808431  290128 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-12816/.minikube/key.pem, removing ...
	I1217 00:42:38.808442  290128 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-12816/.minikube/key.pem
	I1217 00:42:38.808491  290128 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22168-12816/.minikube/key.pem (1679 bytes)
	I1217 00:42:38.808580  290128 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22168-12816/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca-key.pem org=jenkins.no-preload-864613 san=[127.0.0.1 192.168.103.2 localhost minikube no-preload-864613]
	I1217 00:42:38.892180  290128 provision.go:177] copyRemoteCerts
	I1217 00:42:38.892238  290128 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 00:42:38.892281  290128 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-864613
	I1217 00:42:38.911079  290128 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/no-preload-864613/id_rsa Username:docker}
	I1217 00:42:39.005310  290128 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1217 00:42:39.023102  290128 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1217 00:42:39.040760  290128 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1217 00:42:39.058614  290128 provision.go:87] duration metric: took 270.146931ms to configureAuth
	I1217 00:42:39.058640  290128 ubuntu.go:206] setting minikube options for container-runtime
	I1217 00:42:39.058823  290128 config.go:182] Loaded profile config "no-preload-864613": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1217 00:42:39.058943  290128 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-864613
	I1217 00:42:39.077523  290128 main.go:143] libmachine: Using SSH client type: native
	I1217 00:42:39.077804  290128 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1217 00:42:39.077831  290128 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 00:42:39.439563  290128 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 00:42:39.439588  290128 machine.go:97] duration metric: took 4.120922822s to provisionDockerMachine
	I1217 00:42:39.439652  290128 start.go:293] postStartSetup for "no-preload-864613" (driver="docker")
	I1217 00:42:39.439674  290128 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 00:42:39.439737  290128 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 00:42:39.439779  290128 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-864613
	I1217 00:42:39.458833  290128 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/no-preload-864613/id_rsa Username:docker}
	I1217 00:42:39.556864  290128 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 00:42:39.560960  290128 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 00:42:39.560985  290128 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 00:42:39.561021  290128 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-12816/.minikube/addons for local assets ...
	I1217 00:42:39.561074  290128 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-12816/.minikube/files for local assets ...
	I1217 00:42:39.561192  290128 filesync.go:149] local asset: /home/jenkins/minikube-integration/22168-12816/.minikube/files/etc/ssl/certs/163542.pem -> 163542.pem in /etc/ssl/certs
	I1217 00:42:39.561332  290128 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 00:42:39.569440  290128 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/files/etc/ssl/certs/163542.pem --> /etc/ssl/certs/163542.pem (1708 bytes)
	I1217 00:42:39.589195  290128 start.go:296] duration metric: took 149.524862ms for postStartSetup
	I1217 00:42:39.589264  290128 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 00:42:39.589306  290128 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-864613
	I1217 00:42:39.611378  290128 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/no-preload-864613/id_rsa Username:docker}
	I1217 00:42:39.706544  290128 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 00:42:39.711286  290128 fix.go:56] duration metric: took 4.742462188s for fixHost
	I1217 00:42:39.711308  290128 start.go:83] releasing machines lock for "no-preload-864613", held for 4.742503801s
	I1217 00:42:39.711366  290128 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-864613
	I1217 00:42:39.731529  290128 ssh_runner.go:195] Run: cat /version.json
	I1217 00:42:39.731581  290128 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-864613
	I1217 00:42:39.731644  290128 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 00:42:39.731702  290128 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-864613
	I1217 00:42:39.751519  290128 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/no-preload-864613/id_rsa Username:docker}
	I1217 00:42:39.752129  290128 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/no-preload-864613/id_rsa Username:docker}
	I1217 00:42:38.606421  284412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:42:39.107158  284412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:42:39.606578  284412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:42:40.107068  284412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:42:40.607475  284412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:42:41.106671  284412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:42:41.191185  284412 kubeadm.go:1114] duration metric: took 4.657497583s to wait for elevateKubeSystemPrivileges
	I1217 00:42:41.191228  284412 kubeadm.go:403] duration metric: took 15.676954898s to StartCluster
	I1217 00:42:41.191250  284412 settings.go:142] acquiring lock: {Name:mk7d7632cd00ceda791845d793d841181ea8188a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:42:41.191326  284412 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22168-12816/kubeconfig
	I1217 00:42:41.193393  284412 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/kubeconfig: {Name:mkd977475b39383babd1c1bcd25b2b3c1ea2d727 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:42:41.193647  284412 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 00:42:41.193813  284412 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1217 00:42:41.193845  284412 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 00:42:41.193954  284412 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-414413"
	I1217 00:42:41.193968  284412 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-414413"
	I1217 00:42:41.193986  284412 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-414413"
	I1217 00:42:41.194024  284412 config.go:182] Loaded profile config "default-k8s-diff-port-414413": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:42:41.194065  284412 host.go:66] Checking if "default-k8s-diff-port-414413" exists ...
	I1217 00:42:41.193999  284412 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-414413"
	I1217 00:42:41.194464  284412 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-414413 --format={{.State.Status}}
	I1217 00:42:41.194643  284412 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-414413 --format={{.State.Status}}
	I1217 00:42:41.199234  284412 out.go:179] * Verifying Kubernetes components...
	I1217 00:42:41.200842  284412 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:42:41.223659  284412 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 00:42:39.842716  290128 ssh_runner.go:195] Run: systemctl --version
	I1217 00:42:39.901651  290128 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 00:42:39.939255  290128 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 00:42:39.944540  290128 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 00:42:39.944621  290128 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 00:42:39.953110  290128 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1217 00:42:39.953139  290128 start.go:496] detecting cgroup driver to use...
	I1217 00:42:39.953172  290128 detect.go:190] detected "systemd" cgroup driver on host os
	I1217 00:42:39.953213  290128 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 00:42:39.969396  290128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 00:42:39.983260  290128 docker.go:218] disabling cri-docker service (if available) ...
	I1217 00:42:39.983311  290128 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 00:42:40.004183  290128 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 00:42:40.024650  290128 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 00:42:40.129134  290128 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 00:42:40.230889  290128 docker.go:234] disabling docker service ...
	I1217 00:42:40.230963  290128 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 00:42:40.249697  290128 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 00:42:40.263284  290128 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 00:42:40.366858  290128 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 00:42:40.456091  290128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 00:42:40.475522  290128 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 00:42:40.491549  290128 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 00:42:40.491607  290128 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:42:40.501573  290128 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1217 00:42:40.501637  290128 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:42:40.512350  290128 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:42:40.521014  290128 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:42:40.529858  290128 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 00:42:40.539315  290128 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:42:40.548317  290128 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:42:40.556545  290128 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:42:40.565397  290128 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 00:42:40.572624  290128 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 00:42:40.580318  290128 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:42:40.683030  290128 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1217 00:42:41.204474  290128 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 00:42:41.204534  290128 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 00:42:41.212044  290128 start.go:564] Will wait 60s for crictl version
	I1217 00:42:41.212169  290128 ssh_runner.go:195] Run: which crictl
	I1217 00:42:41.218032  290128 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 00:42:41.259772  290128 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1217 00:42:41.259870  290128 ssh_runner.go:195] Run: crio --version
	I1217 00:42:41.301952  290128 ssh_runner.go:195] Run: crio --version
	I1217 00:42:41.350647  290128 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1217 00:42:41.224598  284412 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-414413"
	I1217 00:42:41.224648  284412 host.go:66] Checking if "default-k8s-diff-port-414413" exists ...
	I1217 00:42:41.224951  284412 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:42:41.224967  284412 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 00:42:41.225071  284412 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-414413
	I1217 00:42:41.225154  284412 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-414413 --format={{.State.Status}}
	I1217 00:42:41.259312  284412 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 00:42:41.259335  284412 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 00:42:41.259392  284412 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-414413
	I1217 00:42:41.263451  284412 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/default-k8s-diff-port-414413/id_rsa Username:docker}
	I1217 00:42:41.284213  284412 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/default-k8s-diff-port-414413/id_rsa Username:docker}
	I1217 00:42:41.318678  284412 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1217 00:42:41.383381  284412 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 00:42:41.407448  284412 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:42:41.422819  284412 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:42:41.574977  284412 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1217 00:42:41.576809  284412 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-414413" to be "Ready" ...
	I1217 00:42:41.847773  284412 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1217 00:42:41.351979  290128 cli_runner.go:164] Run: docker network inspect no-preload-864613 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 00:42:41.382246  290128 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1217 00:42:41.388749  290128 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 00:42:41.407129  290128 kubeadm.go:884] updating cluster {Name:no-preload-864613 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-864613 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 00:42:41.407300  290128 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1217 00:42:41.407365  290128 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 00:42:41.460347  290128 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 00:42:41.461086  290128 cache_images.go:86] Images are preloaded, skipping loading
	I1217 00:42:41.461107  290128 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.35.0-beta.0 crio true true} ...
	I1217 00:42:41.461250  290128 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-864613 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-864613 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 00:42:41.461345  290128 ssh_runner.go:195] Run: crio config
	I1217 00:42:41.540745  290128 cni.go:84] Creating CNI manager for ""
	I1217 00:42:41.540776  290128 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 00:42:41.540795  290128 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 00:42:41.540825  290128 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-864613 NodeName:no-preload-864613 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 00:42:41.541050  290128 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-864613"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 00:42:41.541130  290128 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1217 00:42:41.552225  290128 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 00:42:41.552302  290128 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 00:42:41.563587  290128 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I1217 00:42:41.582296  290128 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1217 00:42:41.601245  290128 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2223 bytes)
	I1217 00:42:41.618842  290128 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1217 00:42:41.627097  290128 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 00:42:41.640575  290128 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:42:41.779494  290128 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 00:42:41.808680  290128 certs.go:69] Setting up /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/no-preload-864613 for IP: 192.168.103.2
	I1217 00:42:41.808703  290128 certs.go:195] generating shared ca certs ...
	I1217 00:42:41.808722  290128 certs.go:227] acquiring lock for ca certs: {Name:mk3fafd0dd66863a6056cb02497503a5e6afecf4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:42:41.808901  290128 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22168-12816/.minikube/ca.key
	I1217 00:42:41.808964  290128 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22168-12816/.minikube/proxy-client-ca.key
	I1217 00:42:41.808977  290128 certs.go:257] generating profile certs ...
	I1217 00:42:41.809120  290128 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/no-preload-864613/client.key
	I1217 00:42:41.809192  290128 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/no-preload-864613/apiserver.key.74439f26
	I1217 00:42:41.809257  290128 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/no-preload-864613/proxy-client.key
	I1217 00:42:41.809398  290128 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/16354.pem (1338 bytes)
	W1217 00:42:41.809440  290128 certs.go:480] ignoring /home/jenkins/minikube-integration/22168-12816/.minikube/certs/16354_empty.pem, impossibly tiny 0 bytes
	I1217 00:42:41.809456  290128 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca-key.pem (1679 bytes)
	I1217 00:42:41.809498  290128 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem (1078 bytes)
	I1217 00:42:41.809536  290128 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/cert.pem (1123 bytes)
	I1217 00:42:41.809574  290128 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/key.pem (1679 bytes)
	I1217 00:42:41.809636  290128 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/files/etc/ssl/certs/163542.pem (1708 bytes)
	I1217 00:42:41.810241  290128 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 00:42:41.835907  290128 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1217 00:42:41.859930  290128 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 00:42:41.882138  290128 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1671 bytes)
	I1217 00:42:41.912524  290128 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/no-preload-864613/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1217 00:42:41.936204  290128 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/no-preload-864613/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1217 00:42:41.956213  290128 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/no-preload-864613/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 00:42:41.975723  290128 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/no-preload-864613/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 00:42:41.996532  290128 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 00:42:42.014588  290128 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/certs/16354.pem --> /usr/share/ca-certificates/16354.pem (1338 bytes)
	I1217 00:42:42.033145  290128 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/files/etc/ssl/certs/163542.pem --> /usr/share/ca-certificates/163542.pem (1708 bytes)
	I1217 00:42:42.050882  290128 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 00:42:42.063166  290128 ssh_runner.go:195] Run: openssl version
	I1217 00:42:42.069209  290128 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:42:42.078973  290128 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 00:42:42.087312  290128 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:42:42.091173  290128 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 00:05 /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:42:42.091229  290128 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:42:42.127714  290128 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 00:42:42.135865  290128 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/16354.pem
	I1217 00:42:42.144324  290128 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/16354.pem /etc/ssl/certs/16354.pem
	I1217 00:42:42.151722  290128 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16354.pem
	I1217 00:42:42.155308  290128 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 00:13 /usr/share/ca-certificates/16354.pem
	I1217 00:42:42.155360  290128 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16354.pem
	I1217 00:42:42.194654  290128 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 00:42:42.204167  290128 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/163542.pem
	I1217 00:42:42.212026  290128 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/163542.pem /etc/ssl/certs/163542.pem
	I1217 00:42:42.219563  290128 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/163542.pem
	I1217 00:42:42.223628  290128 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 00:13 /usr/share/ca-certificates/163542.pem
	I1217 00:42:42.223682  290128 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/163542.pem
	I1217 00:42:42.276803  290128 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 00:42:42.285639  290128 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 00:42:42.290281  290128 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 00:42:42.328285  290128 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 00:42:42.379314  290128 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 00:42:42.430181  290128 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 00:42:42.491936  290128 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 00:42:42.551969  290128 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1217 00:42:42.595853  290128 kubeadm.go:401] StartCluster: {Name:no-preload-864613 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-864613 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:42:42.595977  290128 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 00:42:42.596064  290128 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 00:42:42.632158  290128 cri.go:89] found id: "4b34ed74185a723d1987fd893c6b89aa61e85dd77a4391ea83bf44f5d07a0931"
	I1217 00:42:42.632183  290128 cri.go:89] found id: "a590d671bfa52ffb77f09298e606dd5a6cef506d25bf7c749bd516cf65fabaab"
	I1217 00:42:42.632191  290128 cri.go:89] found id: "a12cf220a059b218df62a14f9045f72149c1009f3507c8c36e206fdf43dc9d57"
	I1217 00:42:42.632202  290128 cri.go:89] found id: "d592a6ba05b7b5e2d53ffd9b29510a47348394c0b8faf29e99d49dce869dbeff"
	I1217 00:42:42.632208  290128 cri.go:89] found id: ""
	I1217 00:42:42.632258  290128 ssh_runner.go:195] Run: sudo runc list -f json
	W1217 00:42:42.648079  290128 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T00:42:42Z" level=error msg="open /run/runc: no such file or directory"
	I1217 00:42:42.648152  290128 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 00:42:42.659743  290128 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1217 00:42:42.659831  290128 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1217 00:42:42.659957  290128 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1217 00:42:42.670583  290128 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1217 00:42:42.671843  290128 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-864613" does not appear in /home/jenkins/minikube-integration/22168-12816/kubeconfig
	I1217 00:42:42.672610  290128 kubeconfig.go:62] /home/jenkins/minikube-integration/22168-12816/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-864613" cluster setting kubeconfig missing "no-preload-864613" context setting]
	I1217 00:42:42.673849  290128 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/kubeconfig: {Name:mkd977475b39383babd1c1bcd25b2b3c1ea2d727 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:42:42.676258  290128 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1217 00:42:42.685491  290128 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.103.2
	I1217 00:42:42.685528  290128 kubeadm.go:602] duration metric: took 25.615797ms to restartPrimaryControlPlane
	I1217 00:42:42.685540  290128 kubeadm.go:403] duration metric: took 89.695231ms to StartCluster
	I1217 00:42:42.685558  290128 settings.go:142] acquiring lock: {Name:mk7d7632cd00ceda791845d793d841181ea8188a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:42:42.685612  290128 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22168-12816/kubeconfig
	I1217 00:42:42.687715  290128 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/kubeconfig: {Name:mkd977475b39383babd1c1bcd25b2b3c1ea2d727 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:42:42.687977  290128 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 00:42:42.688235  290128 config.go:182] Loaded profile config "no-preload-864613": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1217 00:42:42.688305  290128 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 00:42:42.688442  290128 addons.go:70] Setting storage-provisioner=true in profile "no-preload-864613"
	I1217 00:42:42.688465  290128 addons.go:239] Setting addon storage-provisioner=true in "no-preload-864613"
	I1217 00:42:42.688466  290128 addons.go:70] Setting dashboard=true in profile "no-preload-864613"
	W1217 00:42:42.688473  290128 addons.go:248] addon storage-provisioner should already be in state true
	I1217 00:42:42.688487  290128 addons.go:70] Setting default-storageclass=true in profile "no-preload-864613"
	I1217 00:42:42.688491  290128 addons.go:239] Setting addon dashboard=true in "no-preload-864613"
	W1217 00:42:42.688504  290128 addons.go:248] addon dashboard should already be in state true
	I1217 00:42:42.688508  290128 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-864613"
	I1217 00:42:42.688534  290128 host.go:66] Checking if "no-preload-864613" exists ...
	I1217 00:42:42.688565  290128 host.go:66] Checking if "no-preload-864613" exists ...
	I1217 00:42:42.688902  290128 cli_runner.go:164] Run: docker container inspect no-preload-864613 --format={{.State.Status}}
	I1217 00:42:42.689014  290128 cli_runner.go:164] Run: docker container inspect no-preload-864613 --format={{.State.Status}}
	I1217 00:42:42.689031  290128 cli_runner.go:164] Run: docker container inspect no-preload-864613 --format={{.State.Status}}
	I1217 00:42:42.696229  290128 out.go:179] * Verifying Kubernetes components...
	I1217 00:42:42.698403  290128 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:42:42.723527  290128 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1217 00:42:42.724785  290128 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1217 00:42:42.725948  290128 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1217 00:42:42.726058  290128 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1217 00:42:42.726130  290128 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-864613
	I1217 00:42:42.727005  290128 addons.go:239] Setting addon default-storageclass=true in "no-preload-864613"
	W1217 00:42:42.727021  290128 addons.go:248] addon default-storageclass should already be in state true
	I1217 00:42:42.727055  290128 host.go:66] Checking if "no-preload-864613" exists ...
	I1217 00:42:42.727489  290128 cli_runner.go:164] Run: docker container inspect no-preload-864613 --format={{.State.Status}}
	I1217 00:42:42.730761  290128 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W1217 00:42:39.742533  280822 node_ready.go:57] node "embed-certs-153232" has "Ready":"False" status (will retry)
	I1217 00:42:41.749292  280822 node_ready.go:49] node "embed-certs-153232" is "Ready"
	I1217 00:42:41.749331  280822 node_ready.go:38] duration metric: took 11.010585734s for node "embed-certs-153232" to be "Ready" ...
	I1217 00:42:41.749349  280822 api_server.go:52] waiting for apiserver process to appear ...
	I1217 00:42:41.749405  280822 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:41.774188  280822 api_server.go:72] duration metric: took 11.489358576s to wait for apiserver process to appear ...
	I1217 00:42:41.774225  280822 api_server.go:88] waiting for apiserver healthz status ...
	I1217 00:42:41.774250  280822 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1217 00:42:41.783349  280822 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1217 00:42:41.784553  280822 api_server.go:141] control plane version: v1.34.2
	I1217 00:42:41.784584  280822 api_server.go:131] duration metric: took 10.351149ms to wait for apiserver health ...
	I1217 00:42:41.784596  280822 system_pods.go:43] waiting for kube-system pods to appear ...
	I1217 00:42:41.788662  280822 system_pods.go:59] 8 kube-system pods found
	I1217 00:42:41.788701  280822 system_pods.go:61] "coredns-66bc5c9577-vtspd" [aedf434b-e03e-479c-a8f2-199e28231d61] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 00:42:41.788711  280822 system_pods.go:61] "etcd-embed-certs-153232" [68a7a631-c79e-48d1-bd8d-1aafc2b61fcc] Running
	I1217 00:42:41.788718  280822 system_pods.go:61] "kindnet-zffzt" [f06f5d73-eef9-4876-b0aa-862d58c18777] Running
	I1217 00:42:41.788724  280822 system_pods.go:61] "kube-apiserver-embed-certs-153232" [a0a484be-31c5-4471-b35c-7d059d9e1b00] Running
	I1217 00:42:41.788736  280822 system_pods.go:61] "kube-controller-manager-embed-certs-153232" [6fd01afb-bd8e-450b-9082-310ff94c5958] Running
	I1217 00:42:41.788741  280822 system_pods.go:61] "kube-proxy-82b8k" [68026912-6bcc-4aee-b806-51f967dc200f] Running
	I1217 00:42:41.788746  280822 system_pods.go:61] "kube-scheduler-embed-certs-153232" [af854f70-8bef-44c5-ad64-197a3282d5c3] Running
	I1217 00:42:41.788794  280822 system_pods.go:61] "storage-provisioner" [ad4a1982-2da6-490d-bcba-f04782d2d9b8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 00:42:41.788808  280822 system_pods.go:74] duration metric: took 4.204985ms to wait for pod list to return data ...
	I1217 00:42:41.788822  280822 default_sa.go:34] waiting for default service account to be created ...
	I1217 00:42:41.793561  280822 default_sa.go:45] found service account: "default"
	I1217 00:42:41.793587  280822 default_sa.go:55] duration metric: took 4.758694ms for default service account to be created ...
	I1217 00:42:41.793600  280822 system_pods.go:116] waiting for k8s-apps to be running ...
	I1217 00:42:41.889984  280822 system_pods.go:86] 8 kube-system pods found
	I1217 00:42:41.890037  280822 system_pods.go:89] "coredns-66bc5c9577-vtspd" [aedf434b-e03e-479c-a8f2-199e28231d61] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 00:42:41.890046  280822 system_pods.go:89] "etcd-embed-certs-153232" [68a7a631-c79e-48d1-bd8d-1aafc2b61fcc] Running
	I1217 00:42:41.890056  280822 system_pods.go:89] "kindnet-zffzt" [f06f5d73-eef9-4876-b0aa-862d58c18777] Running
	I1217 00:42:41.890063  280822 system_pods.go:89] "kube-apiserver-embed-certs-153232" [a0a484be-31c5-4471-b35c-7d059d9e1b00] Running
	I1217 00:42:41.890073  280822 system_pods.go:89] "kube-controller-manager-embed-certs-153232" [6fd01afb-bd8e-450b-9082-310ff94c5958] Running
	I1217 00:42:41.890078  280822 system_pods.go:89] "kube-proxy-82b8k" [68026912-6bcc-4aee-b806-51f967dc200f] Running
	I1217 00:42:41.890085  280822 system_pods.go:89] "kube-scheduler-embed-certs-153232" [af854f70-8bef-44c5-ad64-197a3282d5c3] Running
	I1217 00:42:41.890095  280822 system_pods.go:89] "storage-provisioner" [ad4a1982-2da6-490d-bcba-f04782d2d9b8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 00:42:41.890123  280822 retry.go:31] will retry after 248.746676ms: missing components: kube-dns
	I1217 00:42:42.142494  280822 system_pods.go:86] 8 kube-system pods found
	I1217 00:42:42.142524  280822 system_pods.go:89] "coredns-66bc5c9577-vtspd" [aedf434b-e03e-479c-a8f2-199e28231d61] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 00:42:42.142538  280822 system_pods.go:89] "etcd-embed-certs-153232" [68a7a631-c79e-48d1-bd8d-1aafc2b61fcc] Running
	I1217 00:42:42.142546  280822 system_pods.go:89] "kindnet-zffzt" [f06f5d73-eef9-4876-b0aa-862d58c18777] Running
	I1217 00:42:42.142550  280822 system_pods.go:89] "kube-apiserver-embed-certs-153232" [a0a484be-31c5-4471-b35c-7d059d9e1b00] Running
	I1217 00:42:42.142554  280822 system_pods.go:89] "kube-controller-manager-embed-certs-153232" [6fd01afb-bd8e-450b-9082-310ff94c5958] Running
	I1217 00:42:42.142557  280822 system_pods.go:89] "kube-proxy-82b8k" [68026912-6bcc-4aee-b806-51f967dc200f] Running
	I1217 00:42:42.142560  280822 system_pods.go:89] "kube-scheduler-embed-certs-153232" [af854f70-8bef-44c5-ad64-197a3282d5c3] Running
	I1217 00:42:42.142565  280822 system_pods.go:89] "storage-provisioner" [ad4a1982-2da6-490d-bcba-f04782d2d9b8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 00:42:42.142577  280822 retry.go:31] will retry after 366.812444ms: missing components: kube-dns
	I1217 00:42:42.514253  280822 system_pods.go:86] 8 kube-system pods found
	I1217 00:42:42.514281  280822 system_pods.go:89] "coredns-66bc5c9577-vtspd" [aedf434b-e03e-479c-a8f2-199e28231d61] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 00:42:42.514287  280822 system_pods.go:89] "etcd-embed-certs-153232" [68a7a631-c79e-48d1-bd8d-1aafc2b61fcc] Running
	I1217 00:42:42.514293  280822 system_pods.go:89] "kindnet-zffzt" [f06f5d73-eef9-4876-b0aa-862d58c18777] Running
	I1217 00:42:42.514296  280822 system_pods.go:89] "kube-apiserver-embed-certs-153232" [a0a484be-31c5-4471-b35c-7d059d9e1b00] Running
	I1217 00:42:42.514300  280822 system_pods.go:89] "kube-controller-manager-embed-certs-153232" [6fd01afb-bd8e-450b-9082-310ff94c5958] Running
	I1217 00:42:42.514304  280822 system_pods.go:89] "kube-proxy-82b8k" [68026912-6bcc-4aee-b806-51f967dc200f] Running
	I1217 00:42:42.514307  280822 system_pods.go:89] "kube-scheduler-embed-certs-153232" [af854f70-8bef-44c5-ad64-197a3282d5c3] Running
	I1217 00:42:42.514312  280822 system_pods.go:89] "storage-provisioner" [ad4a1982-2da6-490d-bcba-f04782d2d9b8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 00:42:42.514399  280822 retry.go:31] will retry after 333.656577ms: missing components: kube-dns
	I1217 00:42:42.853133  280822 system_pods.go:86] 8 kube-system pods found
	I1217 00:42:42.853164  280822 system_pods.go:89] "coredns-66bc5c9577-vtspd" [aedf434b-e03e-479c-a8f2-199e28231d61] Running
	I1217 00:42:42.853172  280822 system_pods.go:89] "etcd-embed-certs-153232" [68a7a631-c79e-48d1-bd8d-1aafc2b61fcc] Running
	I1217 00:42:42.853177  280822 system_pods.go:89] "kindnet-zffzt" [f06f5d73-eef9-4876-b0aa-862d58c18777] Running
	I1217 00:42:42.853183  280822 system_pods.go:89] "kube-apiserver-embed-certs-153232" [a0a484be-31c5-4471-b35c-7d059d9e1b00] Running
	I1217 00:42:42.853190  280822 system_pods.go:89] "kube-controller-manager-embed-certs-153232" [6fd01afb-bd8e-450b-9082-310ff94c5958] Running
	I1217 00:42:42.853195  280822 system_pods.go:89] "kube-proxy-82b8k" [68026912-6bcc-4aee-b806-51f967dc200f] Running
	I1217 00:42:42.853200  280822 system_pods.go:89] "kube-scheduler-embed-certs-153232" [af854f70-8bef-44c5-ad64-197a3282d5c3] Running
	I1217 00:42:42.853205  280822 system_pods.go:89] "storage-provisioner" [ad4a1982-2da6-490d-bcba-f04782d2d9b8] Running
	I1217 00:42:42.853214  280822 system_pods.go:126] duration metric: took 1.059606129s to wait for k8s-apps to be running ...
	I1217 00:42:42.853227  280822 system_svc.go:44] waiting for kubelet service to be running ....
	I1217 00:42:42.853279  280822 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 00:42:42.869286  280822 system_svc.go:56] duration metric: took 16.049777ms WaitForService to wait for kubelet
	I1217 00:42:42.869316  280822 kubeadm.go:587] duration metric: took 12.584493992s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 00:42:42.869340  280822 node_conditions.go:102] verifying NodePressure condition ...
	I1217 00:42:42.872567  280822 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1217 00:42:42.872595  280822 node_conditions.go:123] node cpu capacity is 8
	I1217 00:42:42.872609  280822 node_conditions.go:105] duration metric: took 3.264541ms to run NodePressure ...
	I1217 00:42:42.872621  280822 start.go:242] waiting for startup goroutines ...
	I1217 00:42:42.872628  280822 start.go:247] waiting for cluster config update ...
	I1217 00:42:42.872641  280822 start.go:256] writing updated cluster config ...
	I1217 00:42:42.872974  280822 ssh_runner.go:195] Run: rm -f paused
	I1217 00:42:42.877546  280822 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 00:42:42.881940  280822 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-vtspd" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:42:42.886950  280822 pod_ready.go:94] pod "coredns-66bc5c9577-vtspd" is "Ready"
	I1217 00:42:42.886970  280822 pod_ready.go:86] duration metric: took 4.999829ms for pod "coredns-66bc5c9577-vtspd" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:42:42.889527  280822 pod_ready.go:83] waiting for pod "etcd-embed-certs-153232" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:42:42.895847  280822 pod_ready.go:94] pod "etcd-embed-certs-153232" is "Ready"
	I1217 00:42:42.895869  280822 pod_ready.go:86] duration metric: took 6.325871ms for pod "etcd-embed-certs-153232" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:42:42.898281  280822 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-153232" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:42:42.902688  280822 pod_ready.go:94] pod "kube-apiserver-embed-certs-153232" is "Ready"
	I1217 00:42:42.902710  280822 pod_ready.go:86] duration metric: took 4.408331ms for pod "kube-apiserver-embed-certs-153232" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:42:42.905039  280822 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-153232" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:42:41.849058  284412 addons.go:530] duration metric: took 655.212128ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1217 00:42:42.080776  284412 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-414413" context rescaled to 1 replicas
	I1217 00:42:43.281597  280822 pod_ready.go:94] pod "kube-controller-manager-embed-certs-153232" is "Ready"
	I1217 00:42:43.281626  280822 pod_ready.go:86] duration metric: took 376.5674ms for pod "kube-controller-manager-embed-certs-153232" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:42:43.484610  280822 pod_ready.go:83] waiting for pod "kube-proxy-82b8k" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:42:43.960602  280822 pod_ready.go:94] pod "kube-proxy-82b8k" is "Ready"
	I1217 00:42:43.960650  280822 pod_ready.go:86] duration metric: took 476.012578ms for pod "kube-proxy-82b8k" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:42:44.099686  280822 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-153232" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:42:44.482807  280822 pod_ready.go:94] pod "kube-scheduler-embed-certs-153232" is "Ready"
	I1217 00:42:44.482862  280822 pod_ready.go:86] duration metric: took 383.141625ms for pod "kube-scheduler-embed-certs-153232" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:42:44.482879  280822 pod_ready.go:40] duration metric: took 1.605302389s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 00:42:44.546591  280822 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1217 00:42:44.548075  280822 out.go:179] * Done! kubectl is now configured to use "embed-certs-153232" cluster and "default" namespace by default
	I1217 00:42:39.757771  292081 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1217 00:42:39.758042  292081 start.go:159] libmachine.API.Create for "newest-cni-653717" (driver="docker")
	I1217 00:42:39.758083  292081 client.go:173] LocalClient.Create starting
	I1217 00:42:39.758162  292081 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem
	I1217 00:42:39.758203  292081 main.go:143] libmachine: Decoding PEM data...
	I1217 00:42:39.758225  292081 main.go:143] libmachine: Parsing certificate...
	I1217 00:42:39.758288  292081 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22168-12816/.minikube/certs/cert.pem
	I1217 00:42:39.758312  292081 main.go:143] libmachine: Decoding PEM data...
	I1217 00:42:39.758329  292081 main.go:143] libmachine: Parsing certificate...
	I1217 00:42:39.758773  292081 cli_runner.go:164] Run: docker network inspect newest-cni-653717 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1217 00:42:39.776750  292081 cli_runner.go:211] docker network inspect newest-cni-653717 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1217 00:42:39.776824  292081 network_create.go:284] running [docker network inspect newest-cni-653717] to gather additional debugging logs...
	I1217 00:42:39.776846  292081 cli_runner.go:164] Run: docker network inspect newest-cni-653717
	W1217 00:42:39.795539  292081 cli_runner.go:211] docker network inspect newest-cni-653717 returned with exit code 1
	I1217 00:42:39.795568  292081 network_create.go:287] error running [docker network inspect newest-cni-653717]: docker network inspect newest-cni-653717: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-653717 not found
	I1217 00:42:39.795583  292081 network_create.go:289] output of [docker network inspect newest-cni-653717]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-653717 not found
	
	** /stderr **
	I1217 00:42:39.795681  292081 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 00:42:39.813581  292081 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-6ffd1d738f01 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:6e:3d:52:75:47:82} reservation:<nil>}
	I1217 00:42:39.814315  292081 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-280edd437675 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:5e:ae:02:b5:f9:a6} reservation:<nil>}
	I1217 00:42:39.815124  292081 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-9f28d049043c IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:fa:3f:8e:e9:44:56} reservation:<nil>}
	I1217 00:42:39.815715  292081 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-a57026acfc12 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:aa:e6:32:39:49:3b} reservation:<nil>}
	I1217 00:42:39.816283  292081 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-a0b8f164bc66 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:ae:bf:0f:c2:a1:7a} reservation:<nil>}
	I1217 00:42:39.817094  292081 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ed8b70}
	I1217 00:42:39.817124  292081 network_create.go:124] attempt to create docker network newest-cni-653717 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1217 00:42:39.817179  292081 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-653717 newest-cni-653717
	I1217 00:42:39.867249  292081 network_create.go:108] docker network newest-cni-653717 192.168.94.0/24 created
	I1217 00:42:39.867283  292081 kic.go:121] calculated static IP "192.168.94.2" for the "newest-cni-653717" container
	I1217 00:42:39.867363  292081 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1217 00:42:39.884952  292081 cli_runner.go:164] Run: docker volume create newest-cni-653717 --label name.minikube.sigs.k8s.io=newest-cni-653717 --label created_by.minikube.sigs.k8s.io=true
	I1217 00:42:39.903653  292081 oci.go:103] Successfully created a docker volume newest-cni-653717
	I1217 00:42:39.903740  292081 cli_runner.go:164] Run: docker run --rm --name newest-cni-653717-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-653717 --entrypoint /usr/bin/test -v newest-cni-653717:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib
	I1217 00:42:40.332097  292081 oci.go:107] Successfully prepared a docker volume newest-cni-653717
	I1217 00:42:40.332180  292081 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1217 00:42:40.332197  292081 kic.go:194] Starting extracting preloaded images to volume ...
	I1217 00:42:40.332280  292081 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22168-12816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-653717:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir
	I1217 00:42:44.481201  292081 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22168-12816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-653717:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir: (4.148853331s)
	I1217 00:42:44.481236  292081 kic.go:203] duration metric: took 4.149035302s to extract preloaded images to volume ...
	W1217 00:42:44.481343  292081 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1217 00:42:44.481388  292081 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1217 00:42:44.481435  292081 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1217 00:42:42.731891  290128 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:42:42.731907  290128 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 00:42:42.731955  290128 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-864613
	I1217 00:42:42.763570  290128 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 00:42:42.763694  290128 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 00:42:42.763796  290128 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-864613
	I1217 00:42:42.767548  290128 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/no-preload-864613/id_rsa Username:docker}
	I1217 00:42:42.770701  290128 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/no-preload-864613/id_rsa Username:docker}
	I1217 00:42:42.798221  290128 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/no-preload-864613/id_rsa Username:docker}
	I1217 00:42:42.895839  290128 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 00:42:42.898131  290128 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:42:42.919421  290128 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1217 00:42:42.919446  290128 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1217 00:42:42.924414  290128 node_ready.go:35] waiting up to 6m0s for node "no-preload-864613" to be "Ready" ...
	I1217 00:42:42.925969  290128 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:42:42.957244  290128 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1217 00:42:42.957271  290128 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1217 00:42:42.992919  290128 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1217 00:42:42.992940  290128 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1217 00:42:43.014226  290128 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1217 00:42:43.014254  290128 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1217 00:42:43.030103  290128 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1217 00:42:43.030126  290128 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1217 00:42:43.045016  290128 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1217 00:42:43.045040  290128 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1217 00:42:43.058207  290128 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1217 00:42:43.058229  290128 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1217 00:42:43.073567  290128 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1217 00:42:43.073591  290128 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1217 00:42:43.089409  290128 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1217 00:42:43.089435  290128 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1217 00:42:43.104309  290128 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1217 00:42:43.888759  290128 node_ready.go:49] node "no-preload-864613" is "Ready"
	I1217 00:42:43.888791  290128 node_ready.go:38] duration metric: took 964.340322ms for node "no-preload-864613" to be "Ready" ...
	I1217 00:42:43.888806  290128 api_server.go:52] waiting for apiserver process to appear ...
	I1217 00:42:43.888858  290128 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:44.730253  290128 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.832086371s)
	I1217 00:42:44.730302  290128 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.804306557s)
	I1217 00:42:44.730394  290128 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.626038593s)
	I1217 00:42:44.730467  290128 api_server.go:72] duration metric: took 2.042440177s to wait for apiserver process to appear ...
	I1217 00:42:44.730494  290128 api_server.go:88] waiting for apiserver healthz status ...
	I1217 00:42:44.730534  290128 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1217 00:42:44.732808  290128 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-864613 addons enable metrics-server
	
	I1217 00:42:44.736310  290128 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1217 00:42:44.736333  290128 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1217 00:42:44.739032  290128 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1217 00:42:44.740068  290128 addons.go:530] duration metric: took 2.051763832s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1217 00:42:44.554887  292081 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-653717 --name newest-cni-653717 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-653717 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-653717 --network newest-cni-653717 --ip 192.168.94.2 --volume newest-cni-653717:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78
	I1217 00:42:44.859934  292081 cli_runner.go:164] Run: docker container inspect newest-cni-653717 --format={{.State.Running}}
	I1217 00:42:44.879772  292081 cli_runner.go:164] Run: docker container inspect newest-cni-653717 --format={{.State.Status}}
	I1217 00:42:44.902759  292081 cli_runner.go:164] Run: docker exec newest-cni-653717 stat /var/lib/dpkg/alternatives/iptables
	I1217 00:42:44.958456  292081 oci.go:144] the created container "newest-cni-653717" has a running status.
	I1217 00:42:44.958499  292081 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22168-12816/.minikube/machines/newest-cni-653717/id_rsa...
	I1217 00:42:45.146969  292081 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22168-12816/.minikube/machines/newest-cni-653717/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1217 00:42:45.178425  292081 cli_runner.go:164] Run: docker container inspect newest-cni-653717 --format={{.State.Status}}
	I1217 00:42:45.205673  292081 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1217 00:42:45.205749  292081 kic_runner.go:114] Args: [docker exec --privileged newest-cni-653717 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1217 00:42:45.272222  292081 cli_runner.go:164] Run: docker container inspect newest-cni-653717 --format={{.State.Status}}
	I1217 00:42:45.302920  292081 machine.go:94] provisionDockerMachine start ...
	I1217 00:42:45.303079  292081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-653717
	I1217 00:42:45.332494  292081 main.go:143] libmachine: Using SSH client type: native
	I1217 00:42:45.332879  292081 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1217 00:42:45.332905  292081 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 00:42:45.470045  292081 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-653717
	
	I1217 00:42:45.470072  292081 ubuntu.go:182] provisioning hostname "newest-cni-653717"
	I1217 00:42:45.470145  292081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-653717
	I1217 00:42:45.489669  292081 main.go:143] libmachine: Using SSH client type: native
	I1217 00:42:45.489903  292081 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1217 00:42:45.489921  292081 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-653717 && echo "newest-cni-653717" | sudo tee /etc/hostname
	I1217 00:42:45.644161  292081 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-653717
	
	I1217 00:42:45.644290  292081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-653717
	I1217 00:42:45.670660  292081 main.go:143] libmachine: Using SSH client type: native
	I1217 00:42:45.670959  292081 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1217 00:42:45.671001  292081 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-653717' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-653717/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-653717' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 00:42:45.810630  292081 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 00:42:45.810662  292081 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22168-12816/.minikube CaCertPath:/home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22168-12816/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22168-12816/.minikube}
	I1217 00:42:45.810686  292081 ubuntu.go:190] setting up certificates
	I1217 00:42:45.810696  292081 provision.go:84] configureAuth start
	I1217 00:42:45.810765  292081 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-653717
	I1217 00:42:45.829459  292081 provision.go:143] copyHostCerts
	I1217 00:42:45.829525  292081 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-12816/.minikube/ca.pem, removing ...
	I1217 00:42:45.829539  292081 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-12816/.minikube/ca.pem
	I1217 00:42:45.829631  292081 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22168-12816/.minikube/ca.pem (1078 bytes)
	I1217 00:42:45.829741  292081 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-12816/.minikube/cert.pem, removing ...
	I1217 00:42:45.829751  292081 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-12816/.minikube/cert.pem
	I1217 00:42:45.829780  292081 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22168-12816/.minikube/cert.pem (1123 bytes)
	I1217 00:42:45.829850  292081 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-12816/.minikube/key.pem, removing ...
	I1217 00:42:45.829858  292081 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-12816/.minikube/key.pem
	I1217 00:42:45.829882  292081 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22168-12816/.minikube/key.pem (1679 bytes)
	I1217 00:42:45.829934  292081 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22168-12816/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca-key.pem org=jenkins.newest-cni-653717 san=[127.0.0.1 192.168.94.2 localhost minikube newest-cni-653717]
	I1217 00:42:45.958055  292081 provision.go:177] copyRemoteCerts
	I1217 00:42:45.958127  292081 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 00:42:45.958174  292081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-653717
	I1217 00:42:45.984112  292081 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/newest-cni-653717/id_rsa Username:docker}
	I1217 00:42:46.086012  292081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1217 00:42:46.104624  292081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1217 00:42:46.121927  292081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1217 00:42:46.138837  292081 provision.go:87] duration metric: took 328.114013ms to configureAuth
	I1217 00:42:46.138862  292081 ubuntu.go:206] setting minikube options for container-runtime
	I1217 00:42:46.139071  292081 config.go:182] Loaded profile config "newest-cni-653717": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1217 00:42:46.139186  292081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-653717
	I1217 00:42:46.157087  292081 main.go:143] libmachine: Using SSH client type: native
	I1217 00:42:46.157347  292081 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1217 00:42:46.157376  292081 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 00:42:46.424454  292081 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 00:42:46.424492  292081 machine.go:97] duration metric: took 1.121525305s to provisionDockerMachine
	I1217 00:42:46.424503  292081 client.go:176] duration metric: took 6.666411162s to LocalClient.Create
	I1217 00:42:46.424518  292081 start.go:167] duration metric: took 6.666478769s to libmachine.API.Create "newest-cni-653717"
	I1217 00:42:46.424527  292081 start.go:293] postStartSetup for "newest-cni-653717" (driver="docker")
	I1217 00:42:46.424540  292081 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 00:42:46.424592  292081 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 00:42:46.424624  292081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-653717
	I1217 00:42:46.442796  292081 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/newest-cni-653717/id_rsa Username:docker}
	I1217 00:42:46.536618  292081 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 00:42:46.540051  292081 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 00:42:46.540072  292081 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 00:42:46.540082  292081 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-12816/.minikube/addons for local assets ...
	I1217 00:42:46.540139  292081 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-12816/.minikube/files for local assets ...
	I1217 00:42:46.540216  292081 filesync.go:149] local asset: /home/jenkins/minikube-integration/22168-12816/.minikube/files/etc/ssl/certs/163542.pem -> 163542.pem in /etc/ssl/certs
	I1217 00:42:46.540306  292081 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 00:42:46.547511  292081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/files/etc/ssl/certs/163542.pem --> /etc/ssl/certs/163542.pem (1708 bytes)
	I1217 00:42:46.567395  292081 start.go:296] duration metric: took 142.85649ms for postStartSetup
	I1217 00:42:46.567722  292081 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-653717
	I1217 00:42:46.586027  292081 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/newest-cni-653717/config.json ...
	I1217 00:42:46.586297  292081 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 00:42:46.586350  292081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-653717
	I1217 00:42:46.604529  292081 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/newest-cni-653717/id_rsa Username:docker}
	I1217 00:42:46.695141  292081 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 00:42:46.700409  292081 start.go:128] duration metric: took 6.945052111s to createHost
	I1217 00:42:46.700434  292081 start.go:83] releasing machines lock for "newest-cni-653717", held for 6.94528556s
	I1217 00:42:46.700506  292081 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-653717
	I1217 00:42:46.719971  292081 ssh_runner.go:195] Run: cat /version.json
	I1217 00:42:46.720049  292081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-653717
	I1217 00:42:46.720057  292081 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 00:42:46.720124  292081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-653717
	I1217 00:42:46.738390  292081 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/newest-cni-653717/id_rsa Username:docker}
	I1217 00:42:46.738747  292081 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/newest-cni-653717/id_rsa Username:docker}
	I1217 00:42:46.882381  292081 ssh_runner.go:195] Run: systemctl --version
	I1217 00:42:46.888882  292081 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 00:42:46.924064  292081 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 00:42:46.928655  292081 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 00:42:46.928703  292081 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 00:42:46.953084  292081 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1217 00:42:46.953107  292081 start.go:496] detecting cgroup driver to use...
	I1217 00:42:46.953139  292081 detect.go:190] detected "systemd" cgroup driver on host os
	I1217 00:42:46.953190  292081 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 00:42:46.969605  292081 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 00:42:46.981627  292081 docker.go:218] disabling cri-docker service (if available) ...
	I1217 00:42:46.981696  292081 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 00:42:46.997969  292081 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 00:42:47.015481  292081 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 00:42:47.102372  292081 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 00:42:47.195858  292081 docker.go:234] disabling docker service ...
	I1217 00:42:47.195927  292081 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 00:42:47.214755  292081 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 00:42:47.228327  292081 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 00:42:47.313282  292081 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 00:42:47.402263  292081 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 00:42:47.415123  292081 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 00:42:47.429297  292081 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 00:42:47.429343  292081 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:42:47.439140  292081 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1217 00:42:47.439181  292081 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:42:47.447551  292081 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:42:47.456120  292081 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:42:47.464976  292081 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 00:42:47.472532  292081 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:42:47.480749  292081 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:42:47.494427  292081 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:42:47.502977  292081 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 00:42:47.510020  292081 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 00:42:47.517810  292081 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:42:47.603008  292081 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1217 00:42:47.760326  292081 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 00:42:47.760395  292081 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 00:42:47.764833  292081 start.go:564] Will wait 60s for crictl version
	I1217 00:42:47.764898  292081 ssh_runner.go:195] Run: which crictl
	I1217 00:42:47.768771  292081 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 00:42:47.794033  292081 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1217 00:42:47.794112  292081 ssh_runner.go:195] Run: crio --version
	I1217 00:42:47.825452  292081 ssh_runner.go:195] Run: crio --version
	I1217 00:42:47.857636  292081 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1217 00:42:47.858656  292081 cli_runner.go:164] Run: docker network inspect newest-cni-653717 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 00:42:47.876540  292081 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1217 00:42:47.880551  292081 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 00:42:47.891708  292081 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	W1217 00:42:43.579985  284412 node_ready.go:57] node "default-k8s-diff-port-414413" has "Ready":"False" status (will retry)
	W1217 00:42:45.580315  284412 node_ready.go:57] node "default-k8s-diff-port-414413" has "Ready":"False" status (will retry)
	W1217 00:42:47.580369  284412 node_ready.go:57] node "default-k8s-diff-port-414413" has "Ready":"False" status (will retry)
	I1217 00:42:47.892665  292081 kubeadm.go:884] updating cluster {Name:newest-cni-653717 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-653717 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 00:42:47.892819  292081 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1217 00:42:47.892873  292081 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 00:42:47.922682  292081 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 00:42:47.922702  292081 crio.go:433] Images already preloaded, skipping extraction
	I1217 00:42:47.922742  292081 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 00:42:47.948548  292081 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 00:42:47.948566  292081 cache_images.go:86] Images are preloaded, skipping loading
	I1217 00:42:47.948572  292081 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.35.0-beta.0 crio true true} ...
	I1217 00:42:47.948644  292081 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-653717 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-653717 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 00:42:47.948706  292081 ssh_runner.go:195] Run: crio config
	I1217 00:42:47.998076  292081 cni.go:84] Creating CNI manager for ""
	I1217 00:42:47.998103  292081 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 00:42:47.998123  292081 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1217 00:42:47.998153  292081 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-653717 NodeName:newest-cni-653717 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 00:42:47.998316  292081 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-653717"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 00:42:47.998384  292081 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1217 00:42:48.008451  292081 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 00:42:48.008505  292081 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 00:42:48.018068  292081 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1217 00:42:48.032092  292081 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1217 00:42:48.046963  292081 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2218 bytes)
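
The kubeadm, kubelet and kube-proxy configuration shown above is rendered in memory and then copied to the node as /var/tmp/minikube/kubeadm.yaml.new. As a rough, hedged illustration only (this is not minikube's actual generator; the template text and struct fields below are simplified assumptions), a manifest like this can be produced with Go's text/template:

package main

import (
	"os"
	"text/template"
)

// clusterParams holds the handful of values that differ between profiles.
// Field names here are illustrative, not minikube's real structs.
type clusterParams struct {
	NodeName   string
	NodeIP     string
	BindPort   int
	PodSubnet  string
	K8sVersion string
}

const initTmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
kubernetesVersion: {{.K8sVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
`

func main() {
	p := clusterParams{
		NodeName:   "newest-cni-653717",
		NodeIP:     "192.168.94.2",
		BindPort:   8443,
		PodSubnet:  "10.42.0.0/16",
		K8sVersion: "v1.35.0-beta.0",
	}
	// Render the manifest to stdout; minikube instead scp's it to the node.
	t := template.Must(template.New("kubeadm").Parse(initTmpl))
	if err := t.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}
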
	I1217 00:42:48.058965  292081 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1217 00:42:48.062632  292081 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
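
The bash one-liner above keeps /etc/hosts idempotent: every existing control-plane.minikube.internal entry is filtered out before the current node IP is re-appended. A minimal Go sketch of the same filter-and-append step (paths and names taken from the log; not the code minikube actually runs):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const (
		hostsPath = "/etc/hosts"
		hostName  = "control-plane.minikube.internal"
		nodeIP    = "192.168.94.2"
	)

	data, err := os.ReadFile(hostsPath)
	if err != nil {
		panic(err)
	}

	// Drop any line that already maps the control-plane hostname.
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if strings.HasSuffix(line, "\t"+hostName) {
			continue
		}
		kept = append(kept, line)
	}

	// Re-append the entry with the current node IP and write the file back.
	out := strings.TrimRight(strings.Join(kept, "\n"), "\n") +
		fmt.Sprintf("\n%s\t%s\n", nodeIP, hostName)
	if err := os.WriteFile(hostsPath, []byte(out), 0644); err != nil {
		panic(err)
	}
}
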
	I1217 00:42:48.072208  292081 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:42:48.155827  292081 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 00:42:48.181149  292081 certs.go:69] Setting up /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/newest-cni-653717 for IP: 192.168.94.2
	I1217 00:42:48.181168  292081 certs.go:195] generating shared ca certs ...
	I1217 00:42:48.181185  292081 certs.go:227] acquiring lock for ca certs: {Name:mk3fafd0dd66863a6056cb02497503a5e6afecf4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:42:48.181315  292081 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22168-12816/.minikube/ca.key
	I1217 00:42:48.181355  292081 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22168-12816/.minikube/proxy-client-ca.key
	I1217 00:42:48.181365  292081 certs.go:257] generating profile certs ...
	I1217 00:42:48.181431  292081 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/newest-cni-653717/client.key
	I1217 00:42:48.181455  292081 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/newest-cni-653717/client.crt with IP's: []
	I1217 00:42:48.204435  292081 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/newest-cni-653717/client.crt ...
	I1217 00:42:48.204457  292081 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/newest-cni-653717/client.crt: {Name:mk706a547645679cf593c6b6b64a5b13d6509c3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:42:48.204624  292081 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/newest-cni-653717/client.key ...
	I1217 00:42:48.204643  292081 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/newest-cni-653717/client.key: {Name:mk2afcb3a7b31c81f1f103ac537112f286b679a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:42:48.204746  292081 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/newest-cni-653717/apiserver.key.17c07d81
	I1217 00:42:48.204762  292081 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/newest-cni-653717/apiserver.crt.17c07d81 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1217 00:42:48.250524  292081 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/newest-cni-653717/apiserver.crt.17c07d81 ...
	I1217 00:42:48.250546  292081 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/newest-cni-653717/apiserver.crt.17c07d81: {Name:mk9b44a0d7e2e4ebfad604c15171baaa270cfc11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:42:48.250684  292081 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/newest-cni-653717/apiserver.key.17c07d81 ...
	I1217 00:42:48.250696  292081 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/newest-cni-653717/apiserver.key.17c07d81: {Name:mk49169f7d724cca6994caea611fcf0ceba24cbf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:42:48.250766  292081 certs.go:382] copying /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/newest-cni-653717/apiserver.crt.17c07d81 -> /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/newest-cni-653717/apiserver.crt
	I1217 00:42:48.250832  292081 certs.go:386] copying /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/newest-cni-653717/apiserver.key.17c07d81 -> /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/newest-cni-653717/apiserver.key
	I1217 00:42:48.250890  292081 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/newest-cni-653717/proxy-client.key
	I1217 00:42:48.250905  292081 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/newest-cni-653717/proxy-client.crt with IP's: []
	I1217 00:42:48.311073  292081 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/newest-cni-653717/proxy-client.crt ...
	I1217 00:42:48.311096  292081 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/newest-cni-653717/proxy-client.crt: {Name:mk191e919f78ff769818c78eee7f416c2b6c7966 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:42:48.311228  292081 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/newest-cni-653717/proxy-client.key ...
	I1217 00:42:48.311240  292081 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/newest-cni-653717/proxy-client.key: {Name:mk28eaa7bd38fac072b93b2b9e0af2cc79a6b0d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
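
The certs.go/crypto.go lines above generate the profile's client, apiserver and aggregator certificates with the SANs listed in the log. The sketch below only illustrates the crypto/x509 mechanics and is self-signed for brevity; minikube signs these with its shared minikubeCA instead, so treat the key size and output file names as assumptions:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Key for the illustrative certificate.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}

	// SANs mirror the IPs in the log: service VIP, loopback, and node IP.
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		// Three years, matching the CertExpiration:26280h0m0s in the cluster config above.
		NotAfter:    time.Now().Add(24 * 365 * 3 * time.Hour),
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("192.168.94.2")},
	}

	// Self-signed here for brevity; minikube signs with its shared CA instead.
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}

	crt, _ := os.Create("apiserver.crt")
	defer crt.Close()
	pem.Encode(crt, &pem.Block{Type: "CERTIFICATE", Bytes: der})

	keyOut, _ := os.Create("apiserver.key")
	defer keyOut.Close()
	pem.Encode(keyOut, &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
}
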
	I1217 00:42:48.311403  292081 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/16354.pem (1338 bytes)
	W1217 00:42:48.311447  292081 certs.go:480] ignoring /home/jenkins/minikube-integration/22168-12816/.minikube/certs/16354_empty.pem, impossibly tiny 0 bytes
	I1217 00:42:48.311462  292081 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca-key.pem (1679 bytes)
	I1217 00:42:48.311499  292081 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem (1078 bytes)
	I1217 00:42:48.311527  292081 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/cert.pem (1123 bytes)
	I1217 00:42:48.311550  292081 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/key.pem (1679 bytes)
	I1217 00:42:48.311593  292081 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/files/etc/ssl/certs/163542.pem (1708 bytes)
	I1217 00:42:48.312140  292081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 00:42:48.330195  292081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1217 00:42:48.346815  292081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 00:42:48.363297  292081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1671 bytes)
	I1217 00:42:48.380352  292081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/newest-cni-653717/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1217 00:42:48.396979  292081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/newest-cni-653717/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1217 00:42:48.413283  292081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/newest-cni-653717/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 00:42:48.429487  292081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/newest-cni-653717/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1217 00:42:48.446870  292081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/files/etc/ssl/certs/163542.pem --> /usr/share/ca-certificates/163542.pem (1708 bytes)
	I1217 00:42:48.465676  292081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 00:42:48.482658  292081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/certs/16354.pem --> /usr/share/ca-certificates/16354.pem (1338 bytes)
	I1217 00:42:48.499956  292081 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 00:42:48.511878  292081 ssh_runner.go:195] Run: openssl version
	I1217 00:42:48.517834  292081 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/163542.pem
	I1217 00:42:48.524686  292081 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/163542.pem /etc/ssl/certs/163542.pem
	I1217 00:42:48.531652  292081 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/163542.pem
	I1217 00:42:48.535189  292081 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 00:13 /usr/share/ca-certificates/163542.pem
	I1217 00:42:48.535244  292081 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/163542.pem
	I1217 00:42:48.572541  292081 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 00:42:48.580505  292081 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/163542.pem /etc/ssl/certs/3ec20f2e.0
	I1217 00:42:48.587403  292081 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:42:48.594664  292081 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 00:42:48.602161  292081 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:42:48.605749  292081 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 00:05 /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:42:48.605792  292081 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:42:48.647175  292081 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 00:42:48.654933  292081 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1217 00:42:48.662929  292081 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/16354.pem
	I1217 00:42:48.670361  292081 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/16354.pem /etc/ssl/certs/16354.pem
	I1217 00:42:48.677425  292081 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16354.pem
	I1217 00:42:48.680914  292081 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 00:13 /usr/share/ca-certificates/16354.pem
	I1217 00:42:48.680965  292081 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16354.pem
	I1217 00:42:48.717977  292081 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 00:42:48.725584  292081 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/16354.pem /etc/ssl/certs/51391683.0
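
Each CA above is installed under /usr/share/ca-certificates and then linked into /etc/ssl/certs under its OpenSSL subject hash (e.g. 51391683.0), which is how OpenSSL locates trust anchors. A hedged Go sketch of that hash-and-symlink step, shelling out to openssl the same way the log does (the path passed in main is illustrative):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// hashLink creates /etc/ssl/certs/<subject-hash>.0 pointing at certPath,
// mirroring the `openssl x509 -hash` + `ln -fs` pair in the log above.
func hashLink(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	// Replace any stale link so repeated runs stay idempotent.
	_ = os.Remove(link)
	return os.Symlink(certPath, link)
}

func main() {
	if err := hashLink("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		panic(err)
	}
}
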
	I1217 00:42:48.733342  292081 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 00:42:48.737247  292081 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1217 00:42:48.737296  292081 kubeadm.go:401] StartCluster: {Name:newest-cni-653717 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-653717 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:42:48.737379  292081 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 00:42:48.737429  292081 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 00:42:48.766866  292081 cri.go:89] found id: ""
	I1217 00:42:48.766920  292081 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 00:42:48.775388  292081 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 00:42:48.784570  292081 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 00:42:48.784637  292081 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 00:42:48.794346  292081 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 00:42:48.794366  292081 kubeadm.go:158] found existing configuration files:
	
	I1217 00:42:48.794414  292081 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1217 00:42:48.804623  292081 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 00:42:48.804684  292081 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 00:42:48.814188  292081 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1217 00:42:48.824205  292081 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 00:42:48.824260  292081 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 00:42:48.833632  292081 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1217 00:42:48.843633  292081 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 00:42:48.843687  292081 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 00:42:48.852733  292081 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1217 00:42:48.863156  292081 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 00:42:48.863217  292081 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
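
The four grep/rm pairs above are the stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443; otherwise it is removed so kubeadm init can regenerate it. A compact Go sketch of that check (file list and endpoint copied from the log; error handling reduced to "treat unreadable as stale"):

package main

import (
	"bytes"
	"os"
	"path/filepath"
)

func main() {
	endpoint := []byte("https://control-plane.minikube.internal:8443")
	files := []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"}

	for _, name := range files {
		path := filepath.Join("/etc/kubernetes", name)
		data, err := os.ReadFile(path)
		// Missing or unreadable configs, and configs that point elsewhere,
		// are removed so `kubeadm init` can write fresh ones.
		if err != nil || !bytes.Contains(data, endpoint) {
			_ = os.Remove(path)
		}
	}
}
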
	I1217 00:42:48.871629  292081 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 00:42:48.918628  292081 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1217 00:42:48.918706  292081 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 00:42:49.012576  292081 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 00:42:49.012694  292081 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1217 00:42:49.012779  292081 kubeadm.go:319] OS: Linux
	I1217 00:42:49.012850  292081 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 00:42:49.012934  292081 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 00:42:49.012981  292081 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 00:42:49.013068  292081 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 00:42:49.013147  292081 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 00:42:49.013231  292081 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 00:42:49.013306  292081 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 00:42:49.013350  292081 kubeadm.go:319] CGROUPS_IO: enabled
	I1217 00:42:49.079170  292081 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 00:42:49.079317  292081 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 00:42:49.079463  292081 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 00:42:49.089927  292081 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 00:42:49.092958  292081 out.go:252]   - Generating certificates and keys ...
	I1217 00:42:49.093071  292081 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 00:42:49.093173  292081 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 00:42:49.222054  292081 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1217 00:42:49.258747  292081 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1217 00:42:49.400834  292081 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1217 00:42:49.535425  292081 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1217 00:42:45.231179  290128 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1217 00:42:45.239052  290128 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1217 00:42:45.240536  290128 api_server.go:141] control plane version: v1.35.0-beta.0
	I1217 00:42:45.240626  290128 api_server.go:131] duration metric: took 510.122414ms to wait for apiserver health ...
	I1217 00:42:45.240667  290128 system_pods.go:43] waiting for kube-system pods to appear ...
	I1217 00:42:45.245445  290128 system_pods.go:59] 8 kube-system pods found
	I1217 00:42:45.245514  290128 system_pods.go:61] "coredns-7d764666f9-6ql6r" [7fe29911-eb02-4cea-b42b-254fe65a4e65] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 00:42:45.245533  290128 system_pods.go:61] "etcd-no-preload-864613" [2cd02c45-52c1-43f0-8160-939b70247653] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1217 00:42:45.245594  290128 system_pods.go:61] "kindnet-bpf4x" [0b42df61-fef2-41ff-83f3-0abede84a5fb] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1217 00:42:45.245612  290128 system_pods.go:61] "kube-apiserver-no-preload-864613" [039d37cf-0e0f-45fa-9d35-a0a4deb68c2f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1217 00:42:45.245619  290128 system_pods.go:61] "kube-controller-manager-no-preload-864613" [bb99a38a-1b12-43f0-b562-96bca9e3f8fa] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1217 00:42:45.245625  290128 system_pods.go:61] "kube-proxy-2kddk" [7153c193-9583-4abd-a828-ec1dc91151e2] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1217 00:42:45.245630  290128 system_pods.go:61] "kube-scheduler-no-preload-864613" [10f61f47-8e53-41ce-b820-7e662dd29fcb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1217 00:42:45.245675  290128 system_pods.go:61] "storage-provisioner" [bf26b73d-473d-43a0-bf42-4d69abdd9e31] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 00:42:45.245778  290128 system_pods.go:74] duration metric: took 5.02268ms to wait for pod list to return data ...
	I1217 00:42:45.245808  290128 default_sa.go:34] waiting for default service account to be created ...
	I1217 00:42:45.249871  290128 default_sa.go:45] found service account: "default"
	I1217 00:42:45.249896  290128 default_sa.go:55] duration metric: took 4.070194ms for default service account to be created ...
	I1217 00:42:45.249909  290128 system_pods.go:116] waiting for k8s-apps to be running ...
	I1217 00:42:45.254534  290128 system_pods.go:86] 8 kube-system pods found
	I1217 00:42:45.254572  290128 system_pods.go:89] "coredns-7d764666f9-6ql6r" [7fe29911-eb02-4cea-b42b-254fe65a4e65] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 00:42:45.254585  290128 system_pods.go:89] "etcd-no-preload-864613" [2cd02c45-52c1-43f0-8160-939b70247653] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1217 00:42:45.254594  290128 system_pods.go:89] "kindnet-bpf4x" [0b42df61-fef2-41ff-83f3-0abede84a5fb] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1217 00:42:45.254603  290128 system_pods.go:89] "kube-apiserver-no-preload-864613" [039d37cf-0e0f-45fa-9d35-a0a4deb68c2f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1217 00:42:45.254612  290128 system_pods.go:89] "kube-controller-manager-no-preload-864613" [bb99a38a-1b12-43f0-b562-96bca9e3f8fa] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1217 00:42:45.254620  290128 system_pods.go:89] "kube-proxy-2kddk" [7153c193-9583-4abd-a828-ec1dc91151e2] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1217 00:42:45.254634  290128 system_pods.go:89] "kube-scheduler-no-preload-864613" [10f61f47-8e53-41ce-b820-7e662dd29fcb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1217 00:42:45.254647  290128 system_pods.go:89] "storage-provisioner" [bf26b73d-473d-43a0-bf42-4d69abdd9e31] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 00:42:45.254656  290128 system_pods.go:126] duration metric: took 4.73972ms to wait for k8s-apps to be running ...
	I1217 00:42:45.254666  290128 system_svc.go:44] waiting for kubelet service to be running ....
	I1217 00:42:45.254716  290128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 00:42:45.275259  290128 system_svc.go:56] duration metric: took 20.587102ms WaitForService to wait for kubelet
	I1217 00:42:45.275297  290128 kubeadm.go:587] duration metric: took 2.587270544s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 00:42:45.275318  290128 node_conditions.go:102] verifying NodePressure condition ...
	I1217 00:42:45.285140  290128 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1217 00:42:45.285176  290128 node_conditions.go:123] node cpu capacity is 8
	I1217 00:42:45.285194  290128 node_conditions.go:105] duration metric: took 9.870357ms to run NodePressure ...
	I1217 00:42:45.285208  290128 start.go:242] waiting for startup goroutines ...
	I1217 00:42:45.285219  290128 start.go:247] waiting for cluster config update ...
	I1217 00:42:45.285233  290128 start.go:256] writing updated cluster config ...
	I1217 00:42:45.285542  290128 ssh_runner.go:195] Run: rm -f paused
	I1217 00:42:45.292170  290128 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 00:42:45.297160  290128 pod_ready.go:83] waiting for pod "coredns-7d764666f9-6ql6r" in "kube-system" namespace to be "Ready" or be gone ...
	W1217 00:42:47.302980  290128 pod_ready.go:104] pod "coredns-7d764666f9-6ql6r" is not "Ready", error: <nil>
	W1217 00:42:49.303419  290128 pod_ready.go:104] pod "coredns-7d764666f9-6ql6r" is not "Ready", error: <nil>
	I1217 00:42:49.875900  292081 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1217 00:42:49.876099  292081 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-653717] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1217 00:42:49.960694  292081 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1217 00:42:49.960901  292081 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-653717] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1217 00:42:49.986333  292081 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1217 00:42:50.038475  292081 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1217 00:42:50.210231  292081 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1217 00:42:50.210371  292081 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 00:42:50.371871  292081 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 00:42:50.467844  292081 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 00:42:50.524877  292081 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 00:42:50.559110  292081 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 00:42:50.627240  292081 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 00:42:50.627953  292081 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 00:42:50.635874  292081 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1217 00:42:50.080796  284412 node_ready.go:57] node "default-k8s-diff-port-414413" has "Ready":"False" status (will retry)
	W1217 00:42:52.081883  284412 node_ready.go:57] node "default-k8s-diff-port-414413" has "Ready":"False" status (will retry)
	I1217 00:42:52.580309  284412 node_ready.go:49] node "default-k8s-diff-port-414413" is "Ready"
	I1217 00:42:52.580348  284412 node_ready.go:38] duration metric: took 11.003228991s for node "default-k8s-diff-port-414413" to be "Ready" ...
	I1217 00:42:52.580365  284412 api_server.go:52] waiting for apiserver process to appear ...
	I1217 00:42:52.580426  284412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:52.601898  284412 api_server.go:72] duration metric: took 11.408207765s to wait for apiserver process to appear ...
	I1217 00:42:52.601924  284412 api_server.go:88] waiting for apiserver healthz status ...
	I1217 00:42:52.601942  284412 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1217 00:42:52.614364  284412 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
	I1217 00:42:52.619775  284412 api_server.go:141] control plane version: v1.34.2
	I1217 00:42:52.619802  284412 api_server.go:131] duration metric: took 17.87077ms to wait for apiserver health ...
	I1217 00:42:52.619820  284412 system_pods.go:43] waiting for kube-system pods to appear ...
	I1217 00:42:52.628928  284412 system_pods.go:59] 8 kube-system pods found
	I1217 00:42:52.629032  284412 system_pods.go:61] "coredns-66bc5c9577-v76f4" [1370bcd6-f828-4ed0-af58-d2d87c7044bd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 00:42:52.629078  284412 system_pods.go:61] "etcd-default-k8s-diff-port-414413" [286460a9-8a6c-4939-a2a0-0d5b31620d9a] Running
	I1217 00:42:52.629104  284412 system_pods.go:61] "kindnet-hxhbf" [a4c2ed1b-ad48-484e-b779-4b93f3a72d0b] Running
	I1217 00:42:52.629119  284412 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-414413" [aa792fc5-63c2-4287-802e-c99c70a9ab2d] Running
	I1217 00:42:52.629134  284412 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-414413" [e9a02305-5b73-4867-8605-48c8202cf5dd] Running
	I1217 00:42:52.629148  284412 system_pods.go:61] "kube-proxy-prlkw" [9a4571d0-7682-4838-aeb3-ccb4480157b8] Running
	I1217 00:42:52.629162  284412 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-414413" [a71da427-5b35-43f4-827b-62a96fdfda42] Running
	I1217 00:42:52.629194  284412 system_pods.go:61] "storage-provisioner" [0405b749-23a9-4449-90ac-59daf539647b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 00:42:52.629205  284412 system_pods.go:74] duration metric: took 9.377067ms to wait for pod list to return data ...
	I1217 00:42:52.629215  284412 default_sa.go:34] waiting for default service account to be created ...
	I1217 00:42:52.632715  284412 default_sa.go:45] found service account: "default"
	I1217 00:42:52.632773  284412 default_sa.go:55] duration metric: took 3.551355ms for default service account to be created ...
	I1217 00:42:52.632809  284412 system_pods.go:116] waiting for k8s-apps to be running ...
	I1217 00:42:52.638857  284412 system_pods.go:86] 8 kube-system pods found
	I1217 00:42:52.638886  284412 system_pods.go:89] "coredns-66bc5c9577-v76f4" [1370bcd6-f828-4ed0-af58-d2d87c7044bd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 00:42:52.638894  284412 system_pods.go:89] "etcd-default-k8s-diff-port-414413" [286460a9-8a6c-4939-a2a0-0d5b31620d9a] Running
	I1217 00:42:52.638902  284412 system_pods.go:89] "kindnet-hxhbf" [a4c2ed1b-ad48-484e-b779-4b93f3a72d0b] Running
	I1217 00:42:52.638908  284412 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-414413" [aa792fc5-63c2-4287-802e-c99c70a9ab2d] Running
	I1217 00:42:52.638914  284412 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-414413" [e9a02305-5b73-4867-8605-48c8202cf5dd] Running
	I1217 00:42:52.638919  284412 system_pods.go:89] "kube-proxy-prlkw" [9a4571d0-7682-4838-aeb3-ccb4480157b8] Running
	I1217 00:42:52.638924  284412 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-414413" [a71da427-5b35-43f4-827b-62a96fdfda42] Running
	I1217 00:42:52.638931  284412 system_pods.go:89] "storage-provisioner" [0405b749-23a9-4449-90ac-59daf539647b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 00:42:52.638959  284412 retry.go:31] will retry after 286.683057ms: missing components: kube-dns
	I1217 00:42:52.932155  284412 system_pods.go:86] 8 kube-system pods found
	I1217 00:42:52.932199  284412 system_pods.go:89] "coredns-66bc5c9577-v76f4" [1370bcd6-f828-4ed0-af58-d2d87c7044bd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 00:42:52.932207  284412 system_pods.go:89] "etcd-default-k8s-diff-port-414413" [286460a9-8a6c-4939-a2a0-0d5b31620d9a] Running
	I1217 00:42:52.932215  284412 system_pods.go:89] "kindnet-hxhbf" [a4c2ed1b-ad48-484e-b779-4b93f3a72d0b] Running
	I1217 00:42:52.932220  284412 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-414413" [aa792fc5-63c2-4287-802e-c99c70a9ab2d] Running
	I1217 00:42:52.932233  284412 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-414413" [e9a02305-5b73-4867-8605-48c8202cf5dd] Running
	I1217 00:42:52.932238  284412 system_pods.go:89] "kube-proxy-prlkw" [9a4571d0-7682-4838-aeb3-ccb4480157b8] Running
	I1217 00:42:52.932244  284412 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-414413" [a71da427-5b35-43f4-827b-62a96fdfda42] Running
	I1217 00:42:52.932250  284412 system_pods.go:89] "storage-provisioner" [0405b749-23a9-4449-90ac-59daf539647b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 00:42:52.932266  284412 retry.go:31] will retry after 256.870822ms: missing components: kube-dns
	I1217 00:42:53.193952  284412 system_pods.go:86] 8 kube-system pods found
	I1217 00:42:53.194020  284412 system_pods.go:89] "coredns-66bc5c9577-v76f4" [1370bcd6-f828-4ed0-af58-d2d87c7044bd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 00:42:53.194031  284412 system_pods.go:89] "etcd-default-k8s-diff-port-414413" [286460a9-8a6c-4939-a2a0-0d5b31620d9a] Running
	I1217 00:42:53.194039  284412 system_pods.go:89] "kindnet-hxhbf" [a4c2ed1b-ad48-484e-b779-4b93f3a72d0b] Running
	I1217 00:42:53.194046  284412 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-414413" [aa792fc5-63c2-4287-802e-c99c70a9ab2d] Running
	I1217 00:42:53.194052  284412 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-414413" [e9a02305-5b73-4867-8605-48c8202cf5dd] Running
	I1217 00:42:53.194061  284412 system_pods.go:89] "kube-proxy-prlkw" [9a4571d0-7682-4838-aeb3-ccb4480157b8] Running
	I1217 00:42:53.194066  284412 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-414413" [a71da427-5b35-43f4-827b-62a96fdfda42] Running
	I1217 00:42:53.194071  284412 system_pods.go:89] "storage-provisioner" [0405b749-23a9-4449-90ac-59daf539647b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 00:42:53.194090  284412 retry.go:31] will retry after 397.719289ms: missing components: kube-dns
	I1217 00:42:50.637611  292081 out.go:252]   - Booting up control plane ...
	I1217 00:42:50.637750  292081 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 00:42:50.637839  292081 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 00:42:50.638890  292081 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 00:42:50.658581  292081 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 00:42:50.658710  292081 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 00:42:50.668112  292081 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 00:42:50.668403  292081 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 00:42:50.668563  292081 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 00:42:50.812447  292081 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 00:42:50.812638  292081 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 00:42:51.313921  292081 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.552791ms
	I1217 00:42:51.316755  292081 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1217 00:42:51.316901  292081 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I1217 00:42:51.317065  292081 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1217 00:42:51.317186  292081 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1217 00:42:52.825289  292081 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.508380882s
	I1217 00:42:54.252053  292081 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.934578503s
	W1217 00:42:51.808336  290128 pod_ready.go:104] pod "coredns-7d764666f9-6ql6r" is not "Ready", error: <nil>
	W1217 00:42:54.305562  290128 pod_ready.go:104] pod "coredns-7d764666f9-6ql6r" is not "Ready", error: <nil>
	I1217 00:42:53.597204  284412 system_pods.go:86] 8 kube-system pods found
	I1217 00:42:53.597239  284412 system_pods.go:89] "coredns-66bc5c9577-v76f4" [1370bcd6-f828-4ed0-af58-d2d87c7044bd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 00:42:53.597249  284412 system_pods.go:89] "etcd-default-k8s-diff-port-414413" [286460a9-8a6c-4939-a2a0-0d5b31620d9a] Running
	I1217 00:42:53.597258  284412 system_pods.go:89] "kindnet-hxhbf" [a4c2ed1b-ad48-484e-b779-4b93f3a72d0b] Running
	I1217 00:42:53.597269  284412 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-414413" [aa792fc5-63c2-4287-802e-c99c70a9ab2d] Running
	I1217 00:42:53.597275  284412 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-414413" [e9a02305-5b73-4867-8605-48c8202cf5dd] Running
	I1217 00:42:53.597287  284412 system_pods.go:89] "kube-proxy-prlkw" [9a4571d0-7682-4838-aeb3-ccb4480157b8] Running
	I1217 00:42:53.597293  284412 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-414413" [a71da427-5b35-43f4-827b-62a96fdfda42] Running
	I1217 00:42:53.597299  284412 system_pods.go:89] "storage-provisioner" [0405b749-23a9-4449-90ac-59daf539647b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 00:42:53.597317  284412 retry.go:31] will retry after 468.383665ms: missing components: kube-dns
	I1217 00:42:54.073424  284412 system_pods.go:86] 8 kube-system pods found
	I1217 00:42:54.073462  284412 system_pods.go:89] "coredns-66bc5c9577-v76f4" [1370bcd6-f828-4ed0-af58-d2d87c7044bd] Running
	I1217 00:42:54.073470  284412 system_pods.go:89] "etcd-default-k8s-diff-port-414413" [286460a9-8a6c-4939-a2a0-0d5b31620d9a] Running
	I1217 00:42:54.073477  284412 system_pods.go:89] "kindnet-hxhbf" [a4c2ed1b-ad48-484e-b779-4b93f3a72d0b] Running
	I1217 00:42:54.073482  284412 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-414413" [aa792fc5-63c2-4287-802e-c99c70a9ab2d] Running
	I1217 00:42:54.073490  284412 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-414413" [e9a02305-5b73-4867-8605-48c8202cf5dd] Running
	I1217 00:42:54.073499  284412 system_pods.go:89] "kube-proxy-prlkw" [9a4571d0-7682-4838-aeb3-ccb4480157b8] Running
	I1217 00:42:54.073505  284412 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-414413" [a71da427-5b35-43f4-827b-62a96fdfda42] Running
	I1217 00:42:54.073517  284412 system_pods.go:89] "storage-provisioner" [0405b749-23a9-4449-90ac-59daf539647b] Running
	I1217 00:42:54.073526  284412 system_pods.go:126] duration metric: took 1.440699726s to wait for k8s-apps to be running ...
	I1217 00:42:54.073542  284412 system_svc.go:44] waiting for kubelet service to be running ....
	I1217 00:42:54.073591  284412 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 00:42:54.097035  284412 system_svc.go:56] duration metric: took 23.442729ms WaitForService to wait for kubelet
	I1217 00:42:54.097064  284412 kubeadm.go:587] duration metric: took 12.903378654s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 00:42:54.097095  284412 node_conditions.go:102] verifying NodePressure condition ...
	I1217 00:42:54.102295  284412 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1217 00:42:54.102323  284412 node_conditions.go:123] node cpu capacity is 8
	I1217 00:42:54.102349  284412 node_conditions.go:105] duration metric: took 5.238889ms to run NodePressure ...
	I1217 00:42:54.102363  284412 start.go:242] waiting for startup goroutines ...
	I1217 00:42:54.102374  284412 start.go:247] waiting for cluster config update ...
	I1217 00:42:54.102387  284412 start.go:256] writing updated cluster config ...
	I1217 00:42:54.107309  284412 ssh_runner.go:195] Run: rm -f paused
	I1217 00:42:54.115453  284412 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 00:42:54.122663  284412 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-v76f4" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:42:54.128501  284412 pod_ready.go:94] pod "coredns-66bc5c9577-v76f4" is "Ready"
	I1217 00:42:54.128579  284412 pod_ready.go:86] duration metric: took 5.828834ms for pod "coredns-66bc5c9577-v76f4" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:42:54.132687  284412 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-414413" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:42:54.139385  284412 pod_ready.go:94] pod "etcd-default-k8s-diff-port-414413" is "Ready"
	I1217 00:42:54.139408  284412 pod_ready.go:86] duration metric: took 6.667735ms for pod "etcd-default-k8s-diff-port-414413" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:42:54.141609  284412 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-414413" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:42:54.145840  284412 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-414413" is "Ready"
	I1217 00:42:54.145865  284412 pod_ready.go:86] duration metric: took 4.236076ms for pod "kube-apiserver-default-k8s-diff-port-414413" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:42:54.149098  284412 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-414413" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:42:54.521142  284412 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-414413" is "Ready"
	I1217 00:42:54.521239  284412 pod_ready.go:86] duration metric: took 372.112472ms for pod "kube-controller-manager-default-k8s-diff-port-414413" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:42:54.722142  284412 pod_ready.go:83] waiting for pod "kube-proxy-prlkw" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:42:55.121661  284412 pod_ready.go:94] pod "kube-proxy-prlkw" is "Ready"
	I1217 00:42:55.121687  284412 pod_ready.go:86] duration metric: took 399.517411ms for pod "kube-proxy-prlkw" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:42:55.321591  284412 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-414413" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:42:55.720954  284412 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-414413" is "Ready"
	I1217 00:42:55.720984  284412 pod_ready.go:86] duration metric: took 399.365325ms for pod "kube-scheduler-default-k8s-diff-port-414413" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:42:55.721018  284412 pod_ready.go:40] duration metric: took 1.605397099s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 00:42:55.763713  284412 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1217 00:42:55.765254  284412 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-414413" cluster and "default" namespace by default
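
Both profiles finish with the same pattern: list the kube-system pods and, while kube-dns (CoreDNS) is still Pending, retry after a short randomized delay until every expected component is Running. A hedged Go sketch of that retry loop, with the pod query stubbed out since the real check talks to the apiserver:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// missingComponents would normally query the apiserver for kube-system pods;
// here it is a stub so the retry structure stays self-contained.
func missingComponents() ([]string, error) {
	// Assumption: pretend CoreDNS becomes Ready once the clock hits a
	// second divisible by five, just to let the loop terminate.
	if time.Now().Unix()%5 != 0 {
		return []string{"kube-dns"}, nil
	}
	return nil, nil
}

func main() {
	deadline := time.Now().Add(6 * time.Minute)
	for {
		missing, err := missingComponents()
		if err == nil && len(missing) == 0 {
			fmt.Println("all kube-system components are running")
			return
		}
		if time.Now().After(deadline) {
			panic(errors.New("timed out waiting for kube-system pods"))
		}
		// Jittered delay, similar in spirit to the 250-500ms retries in the log.
		wait := 250*time.Millisecond + time.Duration(rand.Intn(250))*time.Millisecond
		fmt.Printf("will retry after %v: missing components: %v\n", wait, missing)
		time.Sleep(wait)
	}
}
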
	I1217 00:42:55.818485  292081 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.501663396s
	I1217 00:42:55.839463  292081 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1217 00:42:55.850270  292081 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1217 00:42:55.859337  292081 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1217 00:42:55.859656  292081 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-653717 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1217 00:42:55.869331  292081 kubeadm.go:319] [bootstrap-token] Using token: xq2phg.ktr5edtc91gmhzse
	I1217 00:42:55.870386  292081 out.go:252]   - Configuring RBAC rules ...
	I1217 00:42:55.870484  292081 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1217 00:42:55.875045  292081 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1217 00:42:55.880634  292081 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1217 00:42:55.883504  292081 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1217 00:42:55.886341  292081 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1217 00:42:55.890047  292081 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1217 00:42:56.226107  292081 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1217 00:42:56.640330  292081 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1217 00:42:57.225156  292081 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1217 00:42:57.226195  292081 kubeadm.go:319] 
	I1217 00:42:57.226269  292081 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1217 00:42:57.226277  292081 kubeadm.go:319] 
	I1217 00:42:57.226356  292081 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1217 00:42:57.226362  292081 kubeadm.go:319] 
	I1217 00:42:57.226384  292081 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1217 00:42:57.226480  292081 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1217 00:42:57.226598  292081 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1217 00:42:57.226616  292081 kubeadm.go:319] 
	I1217 00:42:57.226703  292081 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1217 00:42:57.226717  292081 kubeadm.go:319] 
	I1217 00:42:57.226785  292081 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1217 00:42:57.226798  292081 kubeadm.go:319] 
	I1217 00:42:57.226881  292081 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1217 00:42:57.227022  292081 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1217 00:42:57.227145  292081 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1217 00:42:57.227178  292081 kubeadm.go:319] 
	I1217 00:42:57.227302  292081 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1217 00:42:57.227423  292081 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1217 00:42:57.227438  292081 kubeadm.go:319] 
	I1217 00:42:57.227564  292081 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token xq2phg.ktr5edtc91gmhzse \
	I1217 00:42:57.227719  292081 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:a7c34974519aee4953e03245da076d7a2eba06e40135880a85806e2dab303fa1 \
	I1217 00:42:57.227749  292081 kubeadm.go:319] 	--control-plane 
	I1217 00:42:57.227759  292081 kubeadm.go:319] 
	I1217 00:42:57.227850  292081 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1217 00:42:57.227860  292081 kubeadm.go:319] 
	I1217 00:42:57.227932  292081 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token xq2phg.ktr5edtc91gmhzse \
	I1217 00:42:57.228080  292081 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:a7c34974519aee4953e03245da076d7a2eba06e40135880a85806e2dab303fa1 
	I1217 00:42:57.230536  292081 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1217 00:42:57.230681  292081 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 00:42:57.230702  292081 cni.go:84] Creating CNI manager for ""
	I1217 00:42:57.230709  292081 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 00:42:57.232260  292081 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1217 00:42:57.233377  292081 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1217 00:42:57.237982  292081 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl ...
	I1217 00:42:57.238010  292081 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1217 00:42:57.251056  292081 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1217 00:42:57.457147  292081 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1217 00:42:57.457215  292081 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:42:57.457244  292081 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-653717 minikube.k8s.io/updated_at=2025_12_17T00_42_57_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=c7bb9b74fe8fa422b352c813eb039f077f405cb1 minikube.k8s.io/name=newest-cni-653717 minikube.k8s.io/primary=true
	I1217 00:42:57.466893  292081 ops.go:34] apiserver oom_adj: -16
	I1217 00:42:57.539907  292081 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:42:58.040174  292081 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:42:58.540174  292081 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:42:59.040156  292081 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:42:59.541018  292081 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1217 00:42:56.803720  290128 pod_ready.go:104] pod "coredns-7d764666f9-6ql6r" is not "Ready", error: <nil>
	W1217 00:42:59.302506  290128 pod_ready.go:104] pod "coredns-7d764666f9-6ql6r" is not "Ready", error: <nil>
	I1217 00:43:00.040549  292081 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:43:00.540155  292081 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:43:01.040145  292081 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:43:01.540016  292081 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:43:02.040137  292081 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:43:02.118174  292081 kubeadm.go:1114] duration metric: took 4.661014005s to wait for elevateKubeSystemPrivileges
	I1217 00:43:02.118212  292081 kubeadm.go:403] duration metric: took 13.3809193s to StartCluster
	I1217 00:43:02.118233  292081 settings.go:142] acquiring lock: {Name:mk7d7632cd00ceda791845d793d841181ea8188a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:43:02.118312  292081 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22168-12816/kubeconfig
	I1217 00:43:02.120829  292081 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/kubeconfig: {Name:mkd977475b39383babd1c1bcd25b2b3c1ea2d727 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:43:02.121226  292081 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1217 00:43:02.121245  292081 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 00:43:02.121324  292081 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 00:43:02.121418  292081 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-653717"
	I1217 00:43:02.121438  292081 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-653717"
	I1217 00:43:02.121436  292081 config.go:182] Loaded profile config "newest-cni-653717": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1217 00:43:02.121469  292081 host.go:66] Checking if "newest-cni-653717" exists ...
	I1217 00:43:02.121488  292081 addons.go:70] Setting default-storageclass=true in profile "newest-cni-653717"
	I1217 00:43:02.121503  292081 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-653717"
	I1217 00:43:02.121916  292081 cli_runner.go:164] Run: docker container inspect newest-cni-653717 --format={{.State.Status}}
	I1217 00:43:02.122170  292081 cli_runner.go:164] Run: docker container inspect newest-cni-653717 --format={{.State.Status}}
	I1217 00:43:02.122912  292081 out.go:179] * Verifying Kubernetes components...
	I1217 00:43:02.124175  292081 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:43:02.146305  292081 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 00:43:02.147710  292081 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:43:02.147731  292081 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 00:43:02.147787  292081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-653717
	I1217 00:43:02.148293  292081 addons.go:239] Setting addon default-storageclass=true in "newest-cni-653717"
	I1217 00:43:02.148333  292081 host.go:66] Checking if "newest-cni-653717" exists ...
	I1217 00:43:02.148901  292081 cli_runner.go:164] Run: docker container inspect newest-cni-653717 --format={{.State.Status}}
	I1217 00:43:02.179054  292081 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 00:43:02.179080  292081 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 00:43:02.179142  292081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-653717
	I1217 00:43:02.182767  292081 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/newest-cni-653717/id_rsa Username:docker}
	I1217 00:43:02.204665  292081 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/newest-cni-653717/id_rsa Username:docker}
	I1217 00:43:02.226181  292081 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1217 00:43:02.291907  292081 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 00:43:02.303842  292081 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:43:02.313910  292081 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:43:02.415698  292081 start.go:977] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1217 00:43:02.417709  292081 api_server.go:52] waiting for apiserver process to appear ...
	I1217 00:43:02.417770  292081 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:43:02.595714  292081 api_server.go:72] duration metric: took 474.434249ms to wait for apiserver process to appear ...
	I1217 00:43:02.595741  292081 api_server.go:88] waiting for apiserver healthz status ...
	I1217 00:43:02.595765  292081 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1217 00:43:02.601058  292081 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1217 00:43:02.601705  292081 api_server.go:141] control plane version: v1.35.0-beta.0
	I1217 00:43:02.601724  292081 api_server.go:131] duration metric: took 5.976027ms to wait for apiserver health ...
	I1217 00:43:02.601732  292081 system_pods.go:43] waiting for kube-system pods to appear ...
	I1217 00:43:02.602188  292081 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1217 00:43:02.603119  292081 addons.go:530] duration metric: took 481.791707ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1217 00:43:02.603985  292081 system_pods.go:59] 8 kube-system pods found
	I1217 00:43:02.604034  292081 system_pods.go:61] "coredns-7d764666f9-djwjl" [741342b4-626d-4282-ba19-0e8b37eb2556] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1217 00:43:02.604041  292081 system_pods.go:61] "etcd-newest-cni-653717" [8210d4d5-f66f-43fe-b160-e85265f0dcd0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1217 00:43:02.604047  292081 system_pods.go:61] "kindnet-xmw8c" [7688d3d1-e8d9-4b27-bd63-412f8972c114] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1217 00:43:02.604053  292081 system_pods.go:61] "kube-apiserver-newest-cni-653717" [2a8f1a0d-5c29-49c7-b857-e82bc22e048f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1217 00:43:02.604058  292081 system_pods.go:61] "kube-controller-manager-newest-cni-653717" [d368a2d6-d0bf-4119-982a-d08d313d1433] Running
	I1217 00:43:02.604063  292081 system_pods.go:61] "kube-proxy-9jd8t" [e7d2bcca-b703-4fd2-9af0-c08825a47e85] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1217 00:43:02.604067  292081 system_pods.go:61] "kube-scheduler-newest-cni-653717" [f17c94c7-8363-4f0d-a31c-6db9a2b0f14c] Running
	I1217 00:43:02.604071  292081 system_pods.go:61] "storage-provisioner" [e5c636ed-8536-4f92-8033-757cda2e5a8e] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1217 00:43:02.604076  292081 system_pods.go:74] duration metric: took 2.338809ms to wait for pod list to return data ...
	I1217 00:43:02.604084  292081 default_sa.go:34] waiting for default service account to be created ...
	I1217 00:43:02.605876  292081 default_sa.go:45] found service account: "default"
	I1217 00:43:02.605895  292081 default_sa.go:55] duration metric: took 1.805946ms for default service account to be created ...
	I1217 00:43:02.605903  292081 kubeadm.go:587] duration metric: took 484.627665ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1217 00:43:02.605916  292081 node_conditions.go:102] verifying NodePressure condition ...
	I1217 00:43:02.607656  292081 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1217 00:43:02.607674  292081 node_conditions.go:123] node cpu capacity is 8
	I1217 00:43:02.607685  292081 node_conditions.go:105] duration metric: took 1.765023ms to run NodePressure ...
	I1217 00:43:02.607695  292081 start.go:242] waiting for startup goroutines ...
	I1217 00:43:02.920711  292081 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-653717" context rescaled to 1 replicas
	I1217 00:43:02.920758  292081 start.go:247] waiting for cluster config update ...
	I1217 00:43:02.920775  292081 start.go:256] writing updated cluster config ...
	I1217 00:43:02.921106  292081 ssh_runner.go:195] Run: rm -f paused
	I1217 00:43:02.971248  292081 start.go:625] kubectl: 1.34.3, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1217 00:43:02.973174  292081 out.go:179] * Done! kubectl is now configured to use "newest-cni-653717" cluster and "default" namespace by default
	W1217 00:43:01.302894  290128 pod_ready.go:104] pod "coredns-7d764666f9-6ql6r" is not "Ready", error: <nil>
	W1217 00:43:03.304304  290128 pod_ready.go:104] pod "coredns-7d764666f9-6ql6r" is not "Ready", error: <nil>
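
The apiserver health probe logged above (api_server.go: "Checking apiserver healthz at https://192.168.94.2:8443/healthz ... returned 200: ok") can be repeated by hand. A minimal sketch, reusing the IP and port from the log, skipping TLS verification purely for convenience, and assuming the cluster's default anonymous access to /healthz is still in place:

	# manual healthz probe against the endpoint the test checked above
	curl -k https://192.168.94.2:8443/healthz
	# a healthy apiserver answers HTTP 200 with the body "ok"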
	
	
	==> CRI-O <==
	Dec 17 00:42:52 default-k8s-diff-port-414413 crio[776]: time="2025-12-17T00:42:52.848482181Z" level=info msg="Starting container: 90b543df8b84e2821a3c7a3fc6a65a540323c991a63fa37f2f71942cba7db328" id=e223a6f2-c6ee-4403-889e-7fadffa79835 name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 00:42:52 default-k8s-diff-port-414413 crio[776]: time="2025-12-17T00:42:52.851977099Z" level=info msg="Started container" PID=1847 containerID=90b543df8b84e2821a3c7a3fc6a65a540323c991a63fa37f2f71942cba7db328 description=kube-system/coredns-66bc5c9577-v76f4/coredns id=e223a6f2-c6ee-4403-889e-7fadffa79835 name=/runtime.v1.RuntimeService/StartContainer sandboxID=23aefc1da333747bab2d89d48b1c7870b335418a154d6d66713015e36daa373e
	Dec 17 00:42:56 default-k8s-diff-port-414413 crio[776]: time="2025-12-17T00:42:56.259242399Z" level=info msg="Running pod sandbox: default/busybox/POD" id=34ebce81-f439-4714-b9b3-35c4ba478665 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 17 00:42:56 default-k8s-diff-port-414413 crio[776]: time="2025-12-17T00:42:56.25932782Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 00:42:56 default-k8s-diff-port-414413 crio[776]: time="2025-12-17T00:42:56.265348998Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:28e6e4cdacbcb43d0d8bf5d67755910dc908cecec7f3cd56d187a19b966282f4 UID:48df1bad-87f8-4fbe-aa86-221abf160bdd NetNS:/var/run/netns/1d0bd217-6d99-45af-91af-42c016364d37 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008af38}] Aliases:map[]}"
	Dec 17 00:42:56 default-k8s-diff-port-414413 crio[776]: time="2025-12-17T00:42:56.265389583Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 17 00:42:56 default-k8s-diff-port-414413 crio[776]: time="2025-12-17T00:42:56.281799843Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:28e6e4cdacbcb43d0d8bf5d67755910dc908cecec7f3cd56d187a19b966282f4 UID:48df1bad-87f8-4fbe-aa86-221abf160bdd NetNS:/var/run/netns/1d0bd217-6d99-45af-91af-42c016364d37 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008af38}] Aliases:map[]}"
	Dec 17 00:42:56 default-k8s-diff-port-414413 crio[776]: time="2025-12-17T00:42:56.282322612Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 17 00:42:56 default-k8s-diff-port-414413 crio[776]: time="2025-12-17T00:42:56.28449173Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 17 00:42:56 default-k8s-diff-port-414413 crio[776]: time="2025-12-17T00:42:56.285629555Z" level=info msg="Ran pod sandbox 28e6e4cdacbcb43d0d8bf5d67755910dc908cecec7f3cd56d187a19b966282f4 with infra container: default/busybox/POD" id=34ebce81-f439-4714-b9b3-35c4ba478665 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 17 00:42:56 default-k8s-diff-port-414413 crio[776]: time="2025-12-17T00:42:56.287087769Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=cbf2e2ac-bbce-44f4-89c7-780d445cbfb0 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:42:56 default-k8s-diff-port-414413 crio[776]: time="2025-12-17T00:42:56.287227741Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=cbf2e2ac-bbce-44f4-89c7-780d445cbfb0 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:42:56 default-k8s-diff-port-414413 crio[776]: time="2025-12-17T00:42:56.2872798Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=cbf2e2ac-bbce-44f4-89c7-780d445cbfb0 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:42:56 default-k8s-diff-port-414413 crio[776]: time="2025-12-17T00:42:56.288077774Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=1b1cf699-4f5e-4ec8-af3f-78b2cc1ce351 name=/runtime.v1.ImageService/PullImage
	Dec 17 00:42:56 default-k8s-diff-port-414413 crio[776]: time="2025-12-17T00:42:56.289933785Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 17 00:42:56 default-k8s-diff-port-414413 crio[776]: time="2025-12-17T00:42:56.893521303Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=1b1cf699-4f5e-4ec8-af3f-78b2cc1ce351 name=/runtime.v1.ImageService/PullImage
	Dec 17 00:42:56 default-k8s-diff-port-414413 crio[776]: time="2025-12-17T00:42:56.894227337Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=3036cdc5-932d-4fe1-9c9d-9441972eba14 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:42:56 default-k8s-diff-port-414413 crio[776]: time="2025-12-17T00:42:56.895540975Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=c80cc87f-a1db-4795-9019-62ca4f7c0c24 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:42:56 default-k8s-diff-port-414413 crio[776]: time="2025-12-17T00:42:56.898704992Z" level=info msg="Creating container: default/busybox/busybox" id=d14aa19d-ff14-429e-96cb-deed03691004 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 00:42:56 default-k8s-diff-port-414413 crio[776]: time="2025-12-17T00:42:56.89882932Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 00:42:56 default-k8s-diff-port-414413 crio[776]: time="2025-12-17T00:42:56.902095861Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 00:42:56 default-k8s-diff-port-414413 crio[776]: time="2025-12-17T00:42:56.902475872Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 00:42:56 default-k8s-diff-port-414413 crio[776]: time="2025-12-17T00:42:56.934170938Z" level=info msg="Created container e007f9a780a5303fd707a718f724c4ebec2cd3a09845f96d53979d58b8702d6d: default/busybox/busybox" id=d14aa19d-ff14-429e-96cb-deed03691004 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 00:42:56 default-k8s-diff-port-414413 crio[776]: time="2025-12-17T00:42:56.934779311Z" level=info msg="Starting container: e007f9a780a5303fd707a718f724c4ebec2cd3a09845f96d53979d58b8702d6d" id=62ada9a5-72ca-43cd-8327-af17846ae23e name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 00:42:56 default-k8s-diff-port-414413 crio[776]: time="2025-12-17T00:42:56.936546474Z" level=info msg="Started container" PID=1922 containerID=e007f9a780a5303fd707a718f724c4ebec2cd3a09845f96d53979d58b8702d6d description=default/busybox/busybox id=62ada9a5-72ca-43cd-8327-af17846ae23e name=/runtime.v1.RuntimeService/StartContainer sandboxID=28e6e4cdacbcb43d0d8bf5d67755910dc908cecec7f3cd56d187a19b966282f4
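
The pull sequence above shows CRI-O resolving gcr.io/k8s-minikube/busybox:1.28.4-glibc to the sha256 digest it then runs. If the pull needed to be repeated by hand on the node, a sketch of the equivalent CRI-level commands (assuming crictl is available on the node, as it is in minikube's images) would be:

	# re-pull the busybox test image through the CRI, as CRI-O did above
	sudo crictl pull gcr.io/k8s-minikube/busybox:1.28.4-glibc
	# confirm the image (and its digest) is now present
	sudo crictl images | grep busybox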
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	e007f9a780a53       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   8 seconds ago       Running             busybox                   0                   28e6e4cdacbcb       busybox                                                default
	90b543df8b84e       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      12 seconds ago      Running             coredns                   0                   23aefc1da3337       coredns-66bc5c9577-v76f4                               kube-system
	3573813668a54       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 seconds ago      Running             storage-provisioner       0                   3713543fe6b2e       storage-provisioner                                    kube-system
	3fc9f7a2e971e       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                      23 seconds ago      Running             kube-proxy                0                   a19b21af7e8e7       kube-proxy-prlkw                                       kube-system
	0e53cf197175a       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      23 seconds ago      Running             kindnet-cni               0                   433b6b49e118b       kindnet-hxhbf                                          kube-system
	a648375089384       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                      34 seconds ago      Running             etcd                      0                   0529ee5781670       etcd-default-k8s-diff-port-414413                      kube-system
	17b5eb9b850c8       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                      34 seconds ago      Running             kube-controller-manager   0                   c77009443723b       kube-controller-manager-default-k8s-diff-port-414413   kube-system
	e906219d55937       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                      34 seconds ago      Running             kube-scheduler            0                   e35e572e69f86       kube-scheduler-default-k8s-diff-port-414413            kube-system
	1a6708cb7ef8b       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                      34 seconds ago      Running             kube-apiserver            0                   d96ecd00401c5       kube-apiserver-default-k8s-diff-port-414413            kube-system
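
The table above is the CRI container listing captured from the node. A roughly equivalent view can be regenerated against the same profile; a sketch, assuming the standard minikube SSH access for the profile named in this report:

	# open a shell on the node and list all CRI containers, running or exited
	minikube ssh -p default-k8s-diff-port-414413 -- sudo crictl ps -a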
	
	
	==> coredns [90b543df8b84e2821a3c7a3fc6a65a540323c991a63fa37f2f71942cba7db328] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:58977 - 9019 "HINFO IN 1651700759320242747.6041536594119599763. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.073471136s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-414413
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-414413
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c7bb9b74fe8fa422b352c813eb039f077f405cb1
	                    minikube.k8s.io/name=default-k8s-diff-port-414413
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_17T00_42_36_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Dec 2025 00:42:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-414413
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Dec 2025 00:42:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Dec 2025 00:42:52 +0000   Wed, 17 Dec 2025 00:42:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Dec 2025 00:42:52 +0000   Wed, 17 Dec 2025 00:42:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Dec 2025 00:42:52 +0000   Wed, 17 Dec 2025 00:42:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Dec 2025 00:42:52 +0000   Wed, 17 Dec 2025 00:42:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-414413
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 f435001bb2f82675fe905cfc693dda54
	  System UUID:                30e488d3-49b2-4dae-91a3-bdf1e8cb0774
	  Boot ID:                    0e9cedc6-c46e-4354-b3d2-9272a8b33ae5
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-v76f4                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     24s
	  kube-system                 etcd-default-k8s-diff-port-414413                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         30s
	  kube-system                 kindnet-hxhbf                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      25s
	  kube-system                 kube-apiserver-default-k8s-diff-port-414413             250m (3%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-414413    200m (2%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-proxy-prlkw                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-scheduler-default-k8s-diff-port-414413             100m (1%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 23s   kube-proxy       
	  Normal  Starting                 30s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  30s   kubelet          Node default-k8s-diff-port-414413 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    30s   kubelet          Node default-k8s-diff-port-414413 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     30s   kubelet          Node default-k8s-diff-port-414413 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           25s   node-controller  Node default-k8s-diff-port-414413 event: Registered Node default-k8s-diff-port-414413 in Controller
	  Normal  NodeReady                13s   kubelet          Node default-k8s-diff-port-414413 status is now: NodeReady
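
The node description above is the usual kubectl describe node view for the profile's control-plane node. To regenerate it against the live cluster, assuming the kubeconfig context minikube wrote for the profile (context names normally match the profile name):

	# re-query the node description shown above
	kubectl --context default-k8s-diff-port-414413 describe node default-k8s-diff-port-414413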
	
	
	==> dmesg <==
	[  +0.089382] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024236] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.864694] kauditd_printk_skb: 47 callbacks suppressed
	[Dec17 00:07] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +1.006904] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +1.023891] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +1.023891] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +1.023853] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +1.023879] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +2.048755] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +4.030595] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +8.447143] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[ +16.382404] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000015] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[Dec17 00:08] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	
	
	==> etcd [a6483750893845343d544cca09f5efed006a0aecf2ab652773f84fdebf5f0677] <==
	{"level":"warn","ts":"2025-12-17T00:42:32.483138Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53006","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:42:32.490974Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53030","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:42:32.498856Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53046","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:42:32.507646Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53062","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:42:32.514830Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53082","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:42:32.522537Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53108","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:42:32.538955Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53128","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:42:32.545935Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53136","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:42:32.552937Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53156","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:42:32.560342Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53186","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:42:32.567199Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53218","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:42:32.581197Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53228","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:42:32.584652Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53236","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:42:32.592316Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:42:32.599110Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53278","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:42:32.651918Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53298","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-17T00:42:40.869827Z","caller":"traceutil/trace.go:172","msg":"trace[160867631] transaction","detail":"{read_only:false; response_revision:301; number_of_response:1; }","duration":"131.451305ms","start":"2025-12-17T00:42:40.738340Z","end":"2025-12-17T00:42:40.869792Z","steps":["trace[160867631] 'process raft request'  (duration: 131.400825ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T00:42:40.869878Z","caller":"traceutil/trace.go:172","msg":"trace[134688356] transaction","detail":"{read_only:false; response_revision:300; number_of_response:1; }","duration":"131.917478ms","start":"2025-12-17T00:42:40.737940Z","end":"2025-12-17T00:42:40.869857Z","steps":["trace[134688356] 'process raft request'  (duration: 131.657253ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-17T00:42:40.985398Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"106.484143ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/root-ca-cert-publisher\" limit:1 ","response":"range_response_count:1 size:209"}
	{"level":"info","ts":"2025-12-17T00:42:40.985481Z","caller":"traceutil/trace.go:172","msg":"trace[406119043] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/root-ca-cert-publisher; range_end:; response_count:1; response_revision:301; }","duration":"106.580001ms","start":"2025-12-17T00:42:40.878889Z","end":"2025-12-17T00:42:40.985469Z","steps":["trace[406119043] 'agreement among raft nodes before linearized reading'  (duration: 73.522838ms)","trace[406119043] 'range keys from in-memory index tree'  (duration: 32.86219ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-17T00:42:40.985489Z","caller":"traceutil/trace.go:172","msg":"trace[207880575] transaction","detail":"{read_only:false; response_revision:302; number_of_response:1; }","duration":"188.698734ms","start":"2025-12-17T00:42:40.796763Z","end":"2025-12-17T00:42:40.985461Z","steps":["trace[207880575] 'process raft request'  (duration: 155.710847ms)","trace[207880575] 'compare'  (duration: 32.789963ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-17T00:42:40.985572Z","caller":"traceutil/trace.go:172","msg":"trace[74448784] transaction","detail":"{read_only:false; response_revision:306; number_of_response:1; }","duration":"108.929321ms","start":"2025-12-17T00:42:40.876630Z","end":"2025-12-17T00:42:40.985559Z","steps":["trace[74448784] 'process raft request'  (duration: 108.878084ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T00:42:40.985742Z","caller":"traceutil/trace.go:172","msg":"trace[1552236529] transaction","detail":"{read_only:false; response_revision:303; number_of_response:1; }","duration":"110.119987ms","start":"2025-12-17T00:42:40.875597Z","end":"2025-12-17T00:42:40.985717Z","steps":["trace[1552236529] 'process raft request'  (duration: 109.791129ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T00:42:40.985806Z","caller":"traceutil/trace.go:172","msg":"trace[942732687] transaction","detail":"{read_only:false; response_revision:305; number_of_response:1; }","duration":"110.097351ms","start":"2025-12-17T00:42:40.875699Z","end":"2025-12-17T00:42:40.985796Z","steps":["trace[942732687] 'process raft request'  (duration: 109.768127ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T00:42:40.985866Z","caller":"traceutil/trace.go:172","msg":"trace[1250855338] transaction","detail":"{read_only:false; response_revision:304; number_of_response:1; }","duration":"110.258025ms","start":"2025-12-17T00:42:40.875597Z","end":"2025-12-17T00:42:40.985855Z","steps":["trace[1250855338] 'process raft request'  (duration: 109.835799ms)"],"step_count":1}
	
	
	==> kernel <==
	 00:43:05 up  1:25,  0 user,  load average: 3.77, 2.88, 1.94
	Linux default-k8s-diff-port-414413 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [0e53cf197175aa7d5bd741998a969fc9eb41f70f31a494e4fe6cb33c11aa7bdd] <==
	I1217 00:42:41.776074       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1217 00:42:41.776643       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1217 00:42:41.776911       1 main.go:148] setting mtu 1500 for CNI 
	I1217 00:42:41.776982       1 main.go:178] kindnetd IP family: "ipv4"
	I1217 00:42:41.777065       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-17T00:42:41Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1217 00:42:41.982552       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1217 00:42:41.982615       1 controller.go:381] "Waiting for informer caches to sync"
	I1217 00:42:41.982632       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1217 00:42:42.083501       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1217 00:42:42.483256       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1217 00:42:42.483291       1 metrics.go:72] Registering metrics
	I1217 00:42:42.483457       1 controller.go:711] "Syncing nftables rules"
	I1217 00:42:51.985886       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1217 00:42:51.986016       1 main.go:301] handling current node
	I1217 00:43:01.985159       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1217 00:43:01.985205       1 main.go:301] handling current node
	
	
	==> kube-apiserver [1a6708cb7ef8b83654cdfa68a91c1fbadbd2095028cf1f408905fffe994a7cf7] <==
	I1217 00:42:33.134363       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1217 00:42:33.134370       1 cache.go:39] Caches are synced for autoregister controller
	I1217 00:42:33.139511       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1217 00:42:33.141060       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1217 00:42:33.145945       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1217 00:42:33.146740       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1217 00:42:33.331265       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1217 00:42:34.037162       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1217 00:42:34.042585       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1217 00:42:34.042605       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1217 00:42:34.529988       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1217 00:42:34.571967       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1217 00:42:34.642587       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1217 00:42:34.648673       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1217 00:42:34.649854       1 controller.go:667] quota admission added evaluator for: endpoints
	I1217 00:42:34.654170       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1217 00:42:35.082442       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1217 00:42:35.687071       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1217 00:42:35.697065       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1217 00:42:35.707295       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1217 00:42:40.874967       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1217 00:42:41.088597       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1217 00:42:41.092149       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1217 00:42:41.136469       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	E1217 00:43:04.049195       1 conn.go:339] Error on socket receive: read tcp 192.168.76.2:8444->192.168.76.1:38396: use of closed network connection
	
	
	==> kube-controller-manager [17b5eb9b850c86fe26c34c3618f12511de18886e45d0c3291fe0bcdde30ca9a1] <==
	I1217 00:42:40.053714       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1217 00:42:40.053724       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1217 00:42:40.061837       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1217 00:42:40.066781       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1217 00:42:40.067330       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1217 00:42:40.067456       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1217 00:42:40.067573       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1217 00:42:40.073794       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1217 00:42:40.081447       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1217 00:42:40.081474       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1217 00:42:40.081487       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1217 00:42:40.081516       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1217 00:42:40.081671       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1217 00:42:40.081693       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1217 00:42:40.081853       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1217 00:42:40.081866       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1217 00:42:40.081958       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1217 00:42:40.082183       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1217 00:42:40.082305       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1217 00:42:40.082446       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1217 00:42:40.083780       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1217 00:42:40.087044       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1217 00:42:40.089195       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1217 00:42:40.111359       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1217 00:42:55.019452       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [3fc9f7a2e971e8ad0419407a393b00a76ef039f123a347f3733b72d10ccf30c9] <==
	I1217 00:42:41.539909       1 server_linux.go:53] "Using iptables proxy"
	I1217 00:42:41.607758       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1217 00:42:41.708038       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1217 00:42:41.708158       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1217 00:42:41.708301       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1217 00:42:41.734951       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1217 00:42:41.735048       1 server_linux.go:132] "Using iptables Proxier"
	I1217 00:42:41.742842       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1217 00:42:41.743630       1 server.go:527] "Version info" version="v1.34.2"
	I1217 00:42:41.743706       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 00:42:41.747142       1 config.go:200] "Starting service config controller"
	I1217 00:42:41.747204       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1217 00:42:41.747247       1 config.go:106] "Starting endpoint slice config controller"
	I1217 00:42:41.747272       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1217 00:42:41.747304       1 config.go:403] "Starting serviceCIDR config controller"
	I1217 00:42:41.748720       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1217 00:42:41.748830       1 config.go:309] "Starting node config controller"
	I1217 00:42:41.748864       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1217 00:42:41.748890       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1217 00:42:41.847412       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1217 00:42:41.847412       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1217 00:42:41.849439       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [e906219d559376d43840a2968024c41d30450e1cfd48a1dc265883a58374fffa] <==
	E1217 00:42:33.095501       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1217 00:42:33.095022       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1217 00:42:33.095540       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1217 00:42:33.095604       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1217 00:42:33.095659       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1217 00:42:33.095805       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1217 00:42:33.095904       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1217 00:42:33.095901       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1217 00:42:33.095934       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1217 00:42:33.095964       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1217 00:42:33.096905       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1217 00:42:33.096922       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1217 00:42:33.096922       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1217 00:42:33.926913       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1217 00:42:33.932988       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1217 00:42:33.938961       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1217 00:42:34.007875       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1217 00:42:34.045158       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1217 00:42:34.097584       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1217 00:42:34.139756       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1217 00:42:34.155144       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1217 00:42:34.180277       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1217 00:42:34.223662       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1217 00:42:34.244743       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	I1217 00:42:37.392677       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 17 00:42:36 default-k8s-diff-port-414413 kubelet[1318]: I1217 00:42:36.651016    1318 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-default-k8s-diff-port-414413" podStartSLOduration=1.6509765330000001 podStartE2EDuration="1.650976533s" podCreationTimestamp="2025-12-17 00:42:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-17 00:42:36.650733586 +0000 UTC m=+1.176676134" watchObservedRunningTime="2025-12-17 00:42:36.650976533 +0000 UTC m=+1.176919087"
	Dec 17 00:42:36 default-k8s-diff-port-414413 kubelet[1318]: I1217 00:42:36.668069    1318 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-default-k8s-diff-port-414413" podStartSLOduration=1.6680471799999999 podStartE2EDuration="1.66804718s" podCreationTimestamp="2025-12-17 00:42:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-17 00:42:36.66109039 +0000 UTC m=+1.187032944" watchObservedRunningTime="2025-12-17 00:42:36.66804718 +0000 UTC m=+1.193989730"
	Dec 17 00:42:36 default-k8s-diff-port-414413 kubelet[1318]: I1217 00:42:36.668188    1318 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-default-k8s-diff-port-414413" podStartSLOduration=1.668181418 podStartE2EDuration="1.668181418s" podCreationTimestamp="2025-12-17 00:42:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-17 00:42:36.668043834 +0000 UTC m=+1.193986388" watchObservedRunningTime="2025-12-17 00:42:36.668181418 +0000 UTC m=+1.194123972"
	Dec 17 00:42:36 default-k8s-diff-port-414413 kubelet[1318]: I1217 00:42:36.688158    1318 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-default-k8s-diff-port-414413" podStartSLOduration=1.688125063 podStartE2EDuration="1.688125063s" podCreationTimestamp="2025-12-17 00:42:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-17 00:42:36.679175758 +0000 UTC m=+1.205118329" watchObservedRunningTime="2025-12-17 00:42:36.688125063 +0000 UTC m=+1.214067617"
	Dec 17 00:42:40 default-k8s-diff-port-414413 kubelet[1318]: I1217 00:42:40.126781    1318 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 17 00:42:40 default-k8s-diff-port-414413 kubelet[1318]: I1217 00:42:40.127652    1318 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 17 00:42:41 default-k8s-diff-port-414413 kubelet[1318]: I1217 00:42:41.098725    1318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xv6c7\" (UniqueName: \"kubernetes.io/projected/9a4571d0-7682-4838-aeb3-ccb4480157b8-kube-api-access-xv6c7\") pod \"kube-proxy-prlkw\" (UID: \"9a4571d0-7682-4838-aeb3-ccb4480157b8\") " pod="kube-system/kube-proxy-prlkw"
	Dec 17 00:42:41 default-k8s-diff-port-414413 kubelet[1318]: I1217 00:42:41.098784    1318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9a4571d0-7682-4838-aeb3-ccb4480157b8-kube-proxy\") pod \"kube-proxy-prlkw\" (UID: \"9a4571d0-7682-4838-aeb3-ccb4480157b8\") " pod="kube-system/kube-proxy-prlkw"
	Dec 17 00:42:41 default-k8s-diff-port-414413 kubelet[1318]: I1217 00:42:41.098804    1318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/a4c2ed1b-ad48-484e-b779-4b93f3a72d0b-cni-cfg\") pod \"kindnet-hxhbf\" (UID: \"a4c2ed1b-ad48-484e-b779-4b93f3a72d0b\") " pod="kube-system/kindnet-hxhbf"
	Dec 17 00:42:41 default-k8s-diff-port-414413 kubelet[1318]: I1217 00:42:41.098817    1318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a4c2ed1b-ad48-484e-b779-4b93f3a72d0b-lib-modules\") pod \"kindnet-hxhbf\" (UID: \"a4c2ed1b-ad48-484e-b779-4b93f3a72d0b\") " pod="kube-system/kindnet-hxhbf"
	Dec 17 00:42:41 default-k8s-diff-port-414413 kubelet[1318]: I1217 00:42:41.098872    1318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmhgz\" (UniqueName: \"kubernetes.io/projected/a4c2ed1b-ad48-484e-b779-4b93f3a72d0b-kube-api-access-xmhgz\") pod \"kindnet-hxhbf\" (UID: \"a4c2ed1b-ad48-484e-b779-4b93f3a72d0b\") " pod="kube-system/kindnet-hxhbf"
	Dec 17 00:42:41 default-k8s-diff-port-414413 kubelet[1318]: I1217 00:42:41.098945    1318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9a4571d0-7682-4838-aeb3-ccb4480157b8-xtables-lock\") pod \"kube-proxy-prlkw\" (UID: \"9a4571d0-7682-4838-aeb3-ccb4480157b8\") " pod="kube-system/kube-proxy-prlkw"
	Dec 17 00:42:41 default-k8s-diff-port-414413 kubelet[1318]: I1217 00:42:41.099002    1318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9a4571d0-7682-4838-aeb3-ccb4480157b8-lib-modules\") pod \"kube-proxy-prlkw\" (UID: \"9a4571d0-7682-4838-aeb3-ccb4480157b8\") " pod="kube-system/kube-proxy-prlkw"
	Dec 17 00:42:41 default-k8s-diff-port-414413 kubelet[1318]: I1217 00:42:41.099028    1318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a4c2ed1b-ad48-484e-b779-4b93f3a72d0b-xtables-lock\") pod \"kindnet-hxhbf\" (UID: \"a4c2ed1b-ad48-484e-b779-4b93f3a72d0b\") " pod="kube-system/kindnet-hxhbf"
	Dec 17 00:42:41 default-k8s-diff-port-414413 kubelet[1318]: I1217 00:42:41.651713    1318 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-prlkw" podStartSLOduration=1.651688316 podStartE2EDuration="1.651688316s" podCreationTimestamp="2025-12-17 00:42:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-17 00:42:41.640330075 +0000 UTC m=+6.166272629" watchObservedRunningTime="2025-12-17 00:42:41.651688316 +0000 UTC m=+6.177630873"
	Dec 17 00:42:42 default-k8s-diff-port-414413 kubelet[1318]: I1217 00:42:42.371487    1318 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-hxhbf" podStartSLOduration=2.371461277 podStartE2EDuration="2.371461277s" podCreationTimestamp="2025-12-17 00:42:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-17 00:42:41.651448513 +0000 UTC m=+6.177391068" watchObservedRunningTime="2025-12-17 00:42:42.371461277 +0000 UTC m=+6.897403831"
	Dec 17 00:42:52 default-k8s-diff-port-414413 kubelet[1318]: I1217 00:42:52.414903    1318 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Dec 17 00:42:52 default-k8s-diff-port-414413 kubelet[1318]: I1217 00:42:52.491311    1318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/0405b749-23a9-4449-90ac-59daf539647b-tmp\") pod \"storage-provisioner\" (UID: \"0405b749-23a9-4449-90ac-59daf539647b\") " pod="kube-system/storage-provisioner"
	Dec 17 00:42:52 default-k8s-diff-port-414413 kubelet[1318]: I1217 00:42:52.492582    1318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1370bcd6-f828-4ed0-af58-d2d87c7044bd-config-volume\") pod \"coredns-66bc5c9577-v76f4\" (UID: \"1370bcd6-f828-4ed0-af58-d2d87c7044bd\") " pod="kube-system/coredns-66bc5c9577-v76f4"
	Dec 17 00:42:52 default-k8s-diff-port-414413 kubelet[1318]: I1217 00:42:52.492637    1318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8c6hs\" (UniqueName: \"kubernetes.io/projected/1370bcd6-f828-4ed0-af58-d2d87c7044bd-kube-api-access-8c6hs\") pod \"coredns-66bc5c9577-v76f4\" (UID: \"1370bcd6-f828-4ed0-af58-d2d87c7044bd\") " pod="kube-system/coredns-66bc5c9577-v76f4"
	Dec 17 00:42:52 default-k8s-diff-port-414413 kubelet[1318]: I1217 00:42:52.492662    1318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwg4t\" (UniqueName: \"kubernetes.io/projected/0405b749-23a9-4449-90ac-59daf539647b-kube-api-access-gwg4t\") pod \"storage-provisioner\" (UID: \"0405b749-23a9-4449-90ac-59daf539647b\") " pod="kube-system/storage-provisioner"
	Dec 17 00:42:53 default-k8s-diff-port-414413 kubelet[1318]: I1217 00:42:53.672389    1318 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=12.672366406 podStartE2EDuration="12.672366406s" podCreationTimestamp="2025-12-17 00:42:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-17 00:42:53.671880271 +0000 UTC m=+18.197822849" watchObservedRunningTime="2025-12-17 00:42:53.672366406 +0000 UTC m=+18.198308960"
	Dec 17 00:42:55 default-k8s-diff-port-414413 kubelet[1318]: I1217 00:42:55.951720    1318 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-v76f4" podStartSLOduration=14.951689161000001 podStartE2EDuration="14.951689161s" podCreationTimestamp="2025-12-17 00:42:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-17 00:42:53.686361528 +0000 UTC m=+18.212304080" watchObservedRunningTime="2025-12-17 00:42:55.951689161 +0000 UTC m=+20.477631720"
	Dec 17 00:42:56 default-k8s-diff-port-414413 kubelet[1318]: I1217 00:42:56.017462    1318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmr69\" (UniqueName: \"kubernetes.io/projected/48df1bad-87f8-4fbe-aa86-221abf160bdd-kube-api-access-bmr69\") pod \"busybox\" (UID: \"48df1bad-87f8-4fbe-aa86-221abf160bdd\") " pod="default/busybox"
	Dec 17 00:42:57 default-k8s-diff-port-414413 kubelet[1318]: I1217 00:42:57.682290    1318 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=2.07492223 podStartE2EDuration="2.682268799s" podCreationTimestamp="2025-12-17 00:42:55 +0000 UTC" firstStartedPulling="2025-12-17 00:42:56.287611644 +0000 UTC m=+20.813554183" lastFinishedPulling="2025-12-17 00:42:56.894958211 +0000 UTC m=+21.420900752" observedRunningTime="2025-12-17 00:42:57.682057636 +0000 UTC m=+22.208000188" watchObservedRunningTime="2025-12-17 00:42:57.682268799 +0000 UTC m=+22.208211355"
	
	
	==> storage-provisioner [3573813668a543ef514b8fb4436bab179c02b09504ef19c5e1e9ea79d3a5551f] <==
	I1217 00:42:52.861069       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1217 00:42:52.872675       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1217 00:42:52.872910       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1217 00:42:52.876132       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:42:52.881856       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1217 00:42:52.882567       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1217 00:42:52.882745       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-414413_3c0996b9-3643-447c-bf48-e13d0c1af663!
	I1217 00:42:52.883035       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9cb29e54-a67b-4f6f-a2d9-d357efab670a", APIVersion:"v1", ResourceVersion:"407", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-414413_3c0996b9-3643-447c-bf48-e13d0c1af663 became leader
	W1217 00:42:52.890110       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:42:52.898573       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1217 00:42:52.982955       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-414413_3c0996b9-3643-447c-bf48-e13d0c1af663!
	W1217 00:42:54.901420       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:42:54.905974       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:42:56.909348       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:42:56.912584       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:42:58.915850       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:42:58.921030       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:43:00.924367       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:43:00.930215       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:43:02.933358       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:43:02.937228       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:43:04.941151       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:43:04.945133       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-414413 -n default-k8s-diff-port-414413
helpers_test.go:270: (dbg) Run:  kubectl --context default-k8s-diff-port-414413 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.15s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (6.29s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-653717 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p newest-cni-653717 --alsologtostderr -v=1: exit status 80 (2.399860362s)

                                                
                                                
-- stdout --
	* Pausing node newest-cni-653717 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 00:43:18.449811  303650 out.go:360] Setting OutFile to fd 1 ...
	I1217 00:43:18.450089  303650 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:43:18.450098  303650 out.go:374] Setting ErrFile to fd 2...
	I1217 00:43:18.450103  303650 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:43:18.450306  303650 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-12816/.minikube/bin
	I1217 00:43:18.450521  303650 out.go:368] Setting JSON to false
	I1217 00:43:18.450538  303650 mustload.go:66] Loading cluster: newest-cni-653717
	I1217 00:43:18.450872  303650 config.go:182] Loaded profile config "newest-cni-653717": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1217 00:43:18.451264  303650 cli_runner.go:164] Run: docker container inspect newest-cni-653717 --format={{.State.Status}}
	I1217 00:43:18.469162  303650 host.go:66] Checking if "newest-cni-653717" exists ...
	I1217 00:43:18.469406  303650 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:43:18.528635  303650 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:82 OomKillDisable:false NGoroutines:88 SystemTime:2025-12-17 00:43:18.517616573 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 00:43:18.529555  303650 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1765846775-22141/minikube-v1.37.0-1765846775-22141-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1765846775-22141-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:newest-cni-653717 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1217 00:43:18.531301  303650 out.go:179] * Pausing node newest-cni-653717 ... 
	I1217 00:43:18.533123  303650 host.go:66] Checking if "newest-cni-653717" exists ...
	I1217 00:43:18.533451  303650 ssh_runner.go:195] Run: systemctl --version
	I1217 00:43:18.533508  303650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-653717
	I1217 00:43:18.554262  303650 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/newest-cni-653717/id_rsa Username:docker}
	I1217 00:43:18.660865  303650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 00:43:18.672792  303650 pause.go:52] kubelet running: true
	I1217 00:43:18.672863  303650 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1217 00:43:18.795790  303650 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1217 00:43:18.795881  303650 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1217 00:43:18.870272  303650 cri.go:89] found id: "09db06ede95c5fa90e1d8add618ad4ec6e4856dc88d34335e62d7d1b21d156f1"
	I1217 00:43:18.870298  303650 cri.go:89] found id: "053bf92e9a5f9a52b4b6ab67762abf5f59e6048f064f1e157e90fe6500e59fb1"
	I1217 00:43:18.870305  303650 cri.go:89] found id: "de646b7d108062a9c689d337e628f48d29162ce37e015f70f4dfde0b63fd7fe1"
	I1217 00:43:18.870311  303650 cri.go:89] found id: "f155d5d25fa50fc257ecd4b7da29e9d818c2cfa5f80f0f2c2dfa23e5b3025e69"
	I1217 00:43:18.870316  303650 cri.go:89] found id: "097160c44a70ef0edf501027a68baa41f06ea618f8b735835c64eb3b3c78f426"
	I1217 00:43:18.870322  303650 cri.go:89] found id: "608d066efbe101d85b2f7a5a7b16d1ad974b66b117a0796b4196b6e3e5f4c30a"
	I1217 00:43:18.870327  303650 cri.go:89] found id: ""
	I1217 00:43:18.870387  303650 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 00:43:18.881559  303650 retry.go:31] will retry after 324.137105ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T00:43:18Z" level=error msg="open /run/runc: no such file or directory"
	I1217 00:43:19.206022  303650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 00:43:19.219401  303650 pause.go:52] kubelet running: false
	I1217 00:43:19.219462  303650 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1217 00:43:19.340182  303650 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1217 00:43:19.340256  303650 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1217 00:43:19.420869  303650 cri.go:89] found id: "09db06ede95c5fa90e1d8add618ad4ec6e4856dc88d34335e62d7d1b21d156f1"
	I1217 00:43:19.420906  303650 cri.go:89] found id: "053bf92e9a5f9a52b4b6ab67762abf5f59e6048f064f1e157e90fe6500e59fb1"
	I1217 00:43:19.420913  303650 cri.go:89] found id: "de646b7d108062a9c689d337e628f48d29162ce37e015f70f4dfde0b63fd7fe1"
	I1217 00:43:19.420917  303650 cri.go:89] found id: "f155d5d25fa50fc257ecd4b7da29e9d818c2cfa5f80f0f2c2dfa23e5b3025e69"
	I1217 00:43:19.420921  303650 cri.go:89] found id: "097160c44a70ef0edf501027a68baa41f06ea618f8b735835c64eb3b3c78f426"
	I1217 00:43:19.420925  303650 cri.go:89] found id: "608d066efbe101d85b2f7a5a7b16d1ad974b66b117a0796b4196b6e3e5f4c30a"
	I1217 00:43:19.420929  303650 cri.go:89] found id: ""
	I1217 00:43:19.420978  303650 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 00:43:19.433268  303650 retry.go:31] will retry after 219.554266ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T00:43:19Z" level=error msg="open /run/runc: no such file or directory"
	I1217 00:43:19.653731  303650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 00:43:19.666460  303650 pause.go:52] kubelet running: false
	I1217 00:43:19.666533  303650 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1217 00:43:19.805293  303650 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1217 00:43:19.805378  303650 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1217 00:43:19.875438  303650 cri.go:89] found id: "09db06ede95c5fa90e1d8add618ad4ec6e4856dc88d34335e62d7d1b21d156f1"
	I1217 00:43:19.875461  303650 cri.go:89] found id: "053bf92e9a5f9a52b4b6ab67762abf5f59e6048f064f1e157e90fe6500e59fb1"
	I1217 00:43:19.875467  303650 cri.go:89] found id: "de646b7d108062a9c689d337e628f48d29162ce37e015f70f4dfde0b63fd7fe1"
	I1217 00:43:19.875473  303650 cri.go:89] found id: "f155d5d25fa50fc257ecd4b7da29e9d818c2cfa5f80f0f2c2dfa23e5b3025e69"
	I1217 00:43:19.875478  303650 cri.go:89] found id: "097160c44a70ef0edf501027a68baa41f06ea618f8b735835c64eb3b3c78f426"
	I1217 00:43:19.875491  303650 cri.go:89] found id: "608d066efbe101d85b2f7a5a7b16d1ad974b66b117a0796b4196b6e3e5f4c30a"
	I1217 00:43:19.875496  303650 cri.go:89] found id: ""
	I1217 00:43:19.875536  303650 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 00:43:19.886651  303650 retry.go:31] will retry after 574.334184ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T00:43:19Z" level=error msg="open /run/runc: no such file or directory"
	I1217 00:43:20.461105  303650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 00:43:20.477223  303650 pause.go:52] kubelet running: false
	I1217 00:43:20.477278  303650 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1217 00:43:20.660418  303650 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1217 00:43:20.660505  303650 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1217 00:43:20.746061  303650 cri.go:89] found id: "09db06ede95c5fa90e1d8add618ad4ec6e4856dc88d34335e62d7d1b21d156f1"
	I1217 00:43:20.746089  303650 cri.go:89] found id: "053bf92e9a5f9a52b4b6ab67762abf5f59e6048f064f1e157e90fe6500e59fb1"
	I1217 00:43:20.746096  303650 cri.go:89] found id: "de646b7d108062a9c689d337e628f48d29162ce37e015f70f4dfde0b63fd7fe1"
	I1217 00:43:20.746101  303650 cri.go:89] found id: "f155d5d25fa50fc257ecd4b7da29e9d818c2cfa5f80f0f2c2dfa23e5b3025e69"
	I1217 00:43:20.746105  303650 cri.go:89] found id: "097160c44a70ef0edf501027a68baa41f06ea618f8b735835c64eb3b3c78f426"
	I1217 00:43:20.746110  303650 cri.go:89] found id: "608d066efbe101d85b2f7a5a7b16d1ad974b66b117a0796b4196b6e3e5f4c30a"
	I1217 00:43:20.746115  303650 cri.go:89] found id: ""
	I1217 00:43:20.746163  303650 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 00:43:20.768117  303650 out.go:203] 
	W1217 00:43:20.769299  303650 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T00:43:20Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T00:43:20Z" level=error msg="open /run/runc: no such file or directory"
	
	W1217 00:43:20.769321  303650 out.go:285] * 
	* 
	W1217 00:43:20.773811  303650 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 00:43:20.775530  303650 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p newest-cni-653717 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-653717
helpers_test.go:244: (dbg) docker inspect newest-cni-653717:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "beff396f1ecf0ad7988c26d13bbede7e2b58ac17c04e57fcb9bdf8cdfddcf41e",
	        "Created": "2025-12-17T00:42:44.576413898Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 300623,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T00:43:08.297869312Z",
	            "FinishedAt": "2025-12-17T00:43:07.467274633Z"
	        },
	        "Image": "sha256:2e44aac5cae5bb6b68b129ed5c85e80a5c1aac07706537d46ba12326f0e5c3cf",
	        "ResolvConfPath": "/var/lib/docker/containers/beff396f1ecf0ad7988c26d13bbede7e2b58ac17c04e57fcb9bdf8cdfddcf41e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/beff396f1ecf0ad7988c26d13bbede7e2b58ac17c04e57fcb9bdf8cdfddcf41e/hostname",
	        "HostsPath": "/var/lib/docker/containers/beff396f1ecf0ad7988c26d13bbede7e2b58ac17c04e57fcb9bdf8cdfddcf41e/hosts",
	        "LogPath": "/var/lib/docker/containers/beff396f1ecf0ad7988c26d13bbede7e2b58ac17c04e57fcb9bdf8cdfddcf41e/beff396f1ecf0ad7988c26d13bbede7e2b58ac17c04e57fcb9bdf8cdfddcf41e-json.log",
	        "Name": "/newest-cni-653717",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-653717:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-653717",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "beff396f1ecf0ad7988c26d13bbede7e2b58ac17c04e57fcb9bdf8cdfddcf41e",
	                "LowerDir": "/var/lib/docker/overlay2/b3d705a839526a196f0f1ae4bd0a8c2a9760f4aba6266e16997c71c4dc1dfa7d-init/diff:/var/lib/docker/overlay2/594b812fd6d8db89dab322ea9e00d43dd555e9709fb5e6953e3873cce717392c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b3d705a839526a196f0f1ae4bd0a8c2a9760f4aba6266e16997c71c4dc1dfa7d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b3d705a839526a196f0f1ae4bd0a8c2a9760f4aba6266e16997c71c4dc1dfa7d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b3d705a839526a196f0f1ae4bd0a8c2a9760f4aba6266e16997c71c4dc1dfa7d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-653717",
	                "Source": "/var/lib/docker/volumes/newest-cni-653717/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-653717",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-653717",
	                "name.minikube.sigs.k8s.io": "newest-cni-653717",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "e847a821e2f9d0ab2b9d32264e8759b233987b34c2f1987177c3de5252eca881",
	            "SandboxKey": "/var/run/docker/netns/e847a821e2f9",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33093"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33094"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33097"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33095"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33096"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-653717": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "978c2526e91c5a0b699851fa3eca8542bfa74ada0d698e43a470cd47adc72c7d",
	                    "EndpointID": "b4d8c4ca1cbe7e8db0efb55d0b4cffc0a887248f3855a8e14e5f5734e333aa5c",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "b6:ed:7f:58:f7:ab",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-653717",
	                        "beff396f1ecf"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-653717 -n newest-cni-653717
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-653717 -n newest-cni-653717: exit status 2 (368.795352ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-653717 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p newest-cni-653717 logs -n 25: (1.143450581s)
helpers_test.go:261: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬───────
──────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼───────
──────────────┤
	│ start   │ -p kubernetes-upgrade-803959 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-803959    │ jenkins │ v1.37.0 │ 17 Dec 25 00:42 UTC │ 17 Dec 25 00:42 UTC │
	│ delete  │ -p kubernetes-upgrade-803959                                                                                                                                                                                                                         │ kubernetes-upgrade-803959    │ jenkins │ v1.37.0 │ 17 Dec 25 00:42 UTC │ 17 Dec 25 00:42 UTC │
	│ delete  │ -p disable-driver-mounts-827138                                                                                                                                                                                                                      │ disable-driver-mounts-827138 │ jenkins │ v1.37.0 │ 17 Dec 25 00:42 UTC │ 17 Dec 25 00:42 UTC │
	│ start   │ -p default-k8s-diff-port-414413 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-414413 │ jenkins │ v1.37.0 │ 17 Dec 25 00:42 UTC │ 17 Dec 25 00:42 UTC │
	│ addons  │ enable metrics-server -p no-preload-864613 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ no-preload-864613            │ jenkins │ v1.37.0 │ 17 Dec 25 00:42 UTC │                     │
	│ stop    │ -p no-preload-864613 --alsologtostderr -v=3                                                                                                                                                                                                          │ no-preload-864613            │ jenkins │ v1.37.0 │ 17 Dec 25 00:42 UTC │ 17 Dec 25 00:42 UTC │
	│ image   │ old-k8s-version-742860 image list --format=json                                                                                                                                                                                                      │ old-k8s-version-742860       │ jenkins │ v1.37.0 │ 17 Dec 25 00:42 UTC │ 17 Dec 25 00:42 UTC │
	│ pause   │ -p old-k8s-version-742860 --alsologtostderr -v=1                                                                                                                                                                                                     │ old-k8s-version-742860       │ jenkins │ v1.37.0 │ 17 Dec 25 00:42 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-864613 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ no-preload-864613            │ jenkins │ v1.37.0 │ 17 Dec 25 00:42 UTC │ 17 Dec 25 00:42 UTC │
	│ start   │ -p no-preload-864613 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-864613            │ jenkins │ v1.37.0 │ 17 Dec 25 00:42 UTC │                     │
	│ delete  │ -p old-k8s-version-742860                                                                                                                                                                                                                            │ old-k8s-version-742860       │ jenkins │ v1.37.0 │ 17 Dec 25 00:42 UTC │ 17 Dec 25 00:42 UTC │
	│ delete  │ -p old-k8s-version-742860                                                                                                                                                                                                                            │ old-k8s-version-742860       │ jenkins │ v1.37.0 │ 17 Dec 25 00:42 UTC │ 17 Dec 25 00:42 UTC │
	│ start   │ -p newest-cni-653717 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-653717            │ jenkins │ v1.37.0 │ 17 Dec 25 00:42 UTC │ 17 Dec 25 00:43 UTC │
	│ addons  │ enable metrics-server -p embed-certs-153232 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ embed-certs-153232           │ jenkins │ v1.37.0 │ 17 Dec 25 00:42 UTC │                     │
	│ stop    │ -p embed-certs-153232 --alsologtostderr -v=3                                                                                                                                                                                                         │ embed-certs-153232           │ jenkins │ v1.37.0 │ 17 Dec 25 00:42 UTC │ 17 Dec 25 00:43 UTC │
	│ addons  │ enable metrics-server -p newest-cni-653717 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ newest-cni-653717            │ jenkins │ v1.37.0 │ 17 Dec 25 00:43 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-414413 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                   │ default-k8s-diff-port-414413 │ jenkins │ v1.37.0 │ 17 Dec 25 00:43 UTC │                     │
	│ stop    │ -p newest-cni-653717 --alsologtostderr -v=3                                                                                                                                                                                                          │ newest-cni-653717            │ jenkins │ v1.37.0 │ 17 Dec 25 00:43 UTC │ 17 Dec 25 00:43 UTC │
	│ stop    │ -p default-k8s-diff-port-414413 --alsologtostderr -v=3                                                                                                                                                                                               │ default-k8s-diff-port-414413 │ jenkins │ v1.37.0 │ 17 Dec 25 00:43 UTC │                     │
	│ addons  │ enable dashboard -p newest-cni-653717 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ newest-cni-653717            │ jenkins │ v1.37.0 │ 17 Dec 25 00:43 UTC │ 17 Dec 25 00:43 UTC │
	│ start   │ -p newest-cni-653717 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-653717            │ jenkins │ v1.37.0 │ 17 Dec 25 00:43 UTC │ 17 Dec 25 00:43 UTC │
	│ addons  │ enable dashboard -p embed-certs-153232 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ embed-certs-153232           │ jenkins │ v1.37.0 │ 17 Dec 25 00:43 UTC │ 17 Dec 25 00:43 UTC │
	│ start   │ -p embed-certs-153232 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-153232           │ jenkins │ v1.37.0 │ 17 Dec 25 00:43 UTC │                     │
	│ image   │ newest-cni-653717 image list --format=json                                                                                                                                                                                                           │ newest-cni-653717            │ jenkins │ v1.37.0 │ 17 Dec 25 00:43 UTC │ 17 Dec 25 00:43 UTC │
	│ pause   │ -p newest-cni-653717 --alsologtostderr -v=1                                                                                                                                                                                                          │ newest-cni-653717            │ jenkins │ v1.37.0 │ 17 Dec 25 00:43 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴───────
──────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 00:43:13
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 00:43:13.020242  301437 out.go:360] Setting OutFile to fd 1 ...
	I1217 00:43:13.020476  301437 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:43:13.020486  301437 out.go:374] Setting ErrFile to fd 2...
	I1217 00:43:13.020490  301437 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:43:13.020753  301437 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-12816/.minikube/bin
	I1217 00:43:13.021247  301437 out.go:368] Setting JSON to false
	I1217 00:43:13.022383  301437 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":5143,"bootTime":1765927050,"procs":303,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 00:43:13.022433  301437 start.go:143] virtualization: kvm guest
	I1217 00:43:13.024226  301437 out.go:179] * [embed-certs-153232] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 00:43:13.025825  301437 notify.go:221] Checking for updates...
	I1217 00:43:13.025832  301437 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 00:43:13.027383  301437 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 00:43:13.028603  301437 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22168-12816/kubeconfig
	I1217 00:43:13.029712  301437 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-12816/.minikube
	I1217 00:43:13.030758  301437 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 00:43:13.031785  301437 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 00:43:08.274769  300419 out.go:252] * Restarting existing docker container for "newest-cni-653717" ...
	I1217 00:43:08.274864  300419 cli_runner.go:164] Run: docker start newest-cni-653717
	I1217 00:43:08.530219  300419 cli_runner.go:164] Run: docker container inspect newest-cni-653717 --format={{.State.Status}}
	I1217 00:43:08.549238  300419 kic.go:430] container "newest-cni-653717" state is running.
	I1217 00:43:08.549711  300419 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-653717
	I1217 00:43:08.567874  300419 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/newest-cni-653717/config.json ...
	I1217 00:43:08.568184  300419 machine.go:94] provisionDockerMachine start ...
	I1217 00:43:08.568267  300419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-653717
	I1217 00:43:08.585795  300419 main.go:143] libmachine: Using SSH client type: native
	I1217 00:43:08.586130  300419 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1217 00:43:08.586157  300419 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 00:43:08.586687  300419 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:34000->127.0.0.1:33093: read: connection reset by peer
	I1217 00:43:11.711445  300419 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-653717
	
	I1217 00:43:11.711472  300419 ubuntu.go:182] provisioning hostname "newest-cni-653717"
	I1217 00:43:11.711530  300419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-653717
	I1217 00:43:11.729003  300419 main.go:143] libmachine: Using SSH client type: native
	I1217 00:43:11.729241  300419 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1217 00:43:11.729259  300419 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-653717 && echo "newest-cni-653717" | sudo tee /etc/hostname
	I1217 00:43:11.862934  300419 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-653717
	
	I1217 00:43:11.863058  300419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-653717
	I1217 00:43:11.880935  300419 main.go:143] libmachine: Using SSH client type: native
	I1217 00:43:11.881192  300419 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1217 00:43:11.881226  300419 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-653717' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-653717/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-653717' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 00:43:12.008758  300419 main.go:143] libmachine: SSH cmd err, output: <nil>: 
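The hostname step above is a small idempotent /etc/hosts edit: add a "127.0.1.1 <name>" entry only if no line already ends with the node name, rewriting an existing 127.0.1.1 line when one is present. Below is a minimal local Go sketch of the check-then-append branch only (the hostname and file path are taken from the log; ensureHostsEntry is a hypothetical helper, not minikube's code, and it does not implement the sed-replace branch):

	package main

	import (
		"fmt"
		"os"
		"regexp"
	)

	// ensureHostsEntry appends "127.0.1.1 <name>" to path unless some line
	// already ends with the hostname (same intent as the grep -xq check above).
	func ensureHostsEntry(path, name string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		if regexp.MustCompile(`(?m)^.*\s`+regexp.QuoteMeta(name)+`$`).Match(data) {
			return nil // an entry for this hostname already exists
		}
		f, err := os.OpenFile(path, os.O_APPEND|os.O_WRONLY, 0o644)
		if err != nil {
			return err
		}
		defer f.Close()
		_, err = fmt.Fprintf(f, "127.0.1.1 %s\n", name)
		return err
	}

	func main() {
		if err := ensureHostsEntry("/etc/hosts", "newest-cni-653717"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
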
	I1217 00:43:12.008782  300419 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22168-12816/.minikube CaCertPath:/home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22168-12816/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22168-12816/.minikube}
	I1217 00:43:12.008850  300419 ubuntu.go:190] setting up certificates
	I1217 00:43:12.008862  300419 provision.go:84] configureAuth start
	I1217 00:43:12.008908  300419 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-653717
	I1217 00:43:12.027799  300419 provision.go:143] copyHostCerts
	I1217 00:43:12.027887  300419 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-12816/.minikube/ca.pem, removing ...
	I1217 00:43:12.027913  300419 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-12816/.minikube/ca.pem
	I1217 00:43:12.027987  300419 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22168-12816/.minikube/ca.pem (1078 bytes)
	I1217 00:43:12.028120  300419 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-12816/.minikube/cert.pem, removing ...
	I1217 00:43:12.028131  300419 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-12816/.minikube/cert.pem
	I1217 00:43:12.028186  300419 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22168-12816/.minikube/cert.pem (1123 bytes)
	I1217 00:43:12.028265  300419 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-12816/.minikube/key.pem, removing ...
	I1217 00:43:12.028275  300419 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-12816/.minikube/key.pem
	I1217 00:43:12.028312  300419 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22168-12816/.minikube/key.pem (1679 bytes)
	I1217 00:43:12.028386  300419 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22168-12816/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca-key.pem org=jenkins.newest-cni-653717 san=[127.0.0.1 192.168.94.2 localhost minikube newest-cni-653717]
	I1217 00:43:12.081830  300419 provision.go:177] copyRemoteCerts
	I1217 00:43:12.081888  300419 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 00:43:12.081918  300419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-653717
	I1217 00:43:12.099779  300419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/newest-cni-653717/id_rsa Username:docker}
	I1217 00:43:12.190678  300419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1217 00:43:12.207513  300419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1217 00:43:12.223877  300419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1217 00:43:12.241141  300419 provision.go:87] duration metric: took 232.260945ms to configureAuth
	I1217 00:43:12.241167  300419 ubuntu.go:206] setting minikube options for container-runtime
	I1217 00:43:12.241341  300419 config.go:182] Loaded profile config "newest-cni-653717": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1217 00:43:12.241425  300419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-653717
	I1217 00:43:12.259586  300419 main.go:143] libmachine: Using SSH client type: native
	I1217 00:43:12.259859  300419 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1217 00:43:12.259887  300419 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 00:43:12.536278  300419 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 00:43:12.536308  300419 machine.go:97] duration metric: took 3.968103953s to provisionDockerMachine
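Setting the container-runtime options above amounts to dropping a one-line environment file for CRI-O and bouncing the service. A hedged local sketch of the same write-then-restart sequence (the path and the --insecure-registry value come from the log; this assumes root on a systemd host and is not minikube's implementation):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		// Same content the provisioner pipes through `sudo tee` in the log above.
		env := "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n"
		if err := os.WriteFile("/etc/sysconfig/crio.minikube", []byte(env), 0o644); err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		// Restart CRI-O so the new option is picked up.
		if out, err := exec.Command("systemctl", "restart", "crio").CombinedOutput(); err != nil {
			fmt.Fprintf(os.Stderr, "restart crio: %v\n%s", err, out)
		}
	}
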
	I1217 00:43:12.536323  300419 start.go:293] postStartSetup for "newest-cni-653717" (driver="docker")
	I1217 00:43:12.536340  300419 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 00:43:12.536410  300419 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 00:43:12.536455  300419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-653717
	I1217 00:43:12.555094  300419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/newest-cni-653717/id_rsa Username:docker}
	I1217 00:43:12.647080  300419 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 00:43:12.650723  300419 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 00:43:12.650757  300419 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 00:43:12.650773  300419 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-12816/.minikube/addons for local assets ...
	I1217 00:43:12.650825  300419 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-12816/.minikube/files for local assets ...
	I1217 00:43:12.650946  300419 filesync.go:149] local asset: /home/jenkins/minikube-integration/22168-12816/.minikube/files/etc/ssl/certs/163542.pem -> 163542.pem in /etc/ssl/certs
	I1217 00:43:12.651152  300419 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 00:43:12.675027  300419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/files/etc/ssl/certs/163542.pem --> /etc/ssl/certs/163542.pem (1708 bytes)
	I1217 00:43:12.699530  300419 start.go:296] duration metric: took 163.192934ms for postStartSetup
	I1217 00:43:12.699625  300419 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 00:43:12.699669  300419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-653717
	I1217 00:43:12.718970  300419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/newest-cni-653717/id_rsa Username:docker}
	I1217 00:43:12.813385  300419 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 00:43:12.818537  300419 fix.go:56] duration metric: took 4.562240336s for fixHost
	I1217 00:43:12.818565  300419 start.go:83] releasing machines lock for "newest-cni-653717", held for 4.562291137s
	I1217 00:43:12.818630  300419 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-653717
	I1217 00:43:12.839076  300419 ssh_runner.go:195] Run: cat /version.json
	I1217 00:43:12.839140  300419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-653717
	I1217 00:43:12.839157  300419 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 00:43:12.839236  300419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-653717
	I1217 00:43:12.858954  300419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/newest-cni-653717/id_rsa Username:docker}
	I1217 00:43:12.859795  300419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/newest-cni-653717/id_rsa Username:docker}
	I1217 00:43:13.011144  300419 ssh_runner.go:195] Run: systemctl --version
	I1217 00:43:13.018259  300419 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 00:43:13.054673  300419 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 00:43:13.059674  300419 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 00:43:13.059734  300419 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 00:43:13.067926  300419 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1217 00:43:13.067948  300419 start.go:496] detecting cgroup driver to use...
	I1217 00:43:13.067977  300419 detect.go:190] detected "systemd" cgroup driver on host os
	I1217 00:43:13.068042  300419 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 00:43:13.085483  300419 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 00:43:13.033439  301437 config.go:182] Loaded profile config "embed-certs-153232": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:43:13.034151  301437 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 00:43:13.060158  301437 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1217 00:43:13.060303  301437 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:43:13.119210  301437 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:70 OomKillDisable:false NGoroutines:83 SystemTime:2025-12-17 00:43:13.109693366 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 00:43:13.119314  301437 docker.go:319] overlay module found
	I1217 00:43:13.120948  301437 out.go:179] * Using the docker driver based on existing profile
	I1217 00:43:13.122303  301437 start.go:309] selected driver: docker
	I1217 00:43:13.122316  301437 start.go:927] validating driver "docker" against &{Name:embed-certs-153232 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-153232 Namespace:default APIServerHAVIP: APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:43:13.122392  301437 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 00:43:13.122931  301437 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:43:13.189846  301437 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:70 OomKillDisable:false NGoroutines:83 SystemTime:2025-12-17 00:43:13.179729115 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 00:43:13.190209  301437 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 00:43:13.190243  301437 cni.go:84] Creating CNI manager for ""
	I1217 00:43:13.190312  301437 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 00:43:13.190361  301437 start.go:353] cluster config:
	{Name:embed-certs-153232 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-153232 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:43:13.192222  301437 out.go:179] * Starting "embed-certs-153232" primary control-plane node in "embed-certs-153232" cluster
	I1217 00:43:13.193581  301437 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 00:43:13.194844  301437 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 00:43:13.196084  301437 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1217 00:43:13.196117  301437 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22168-12816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1217 00:43:13.196147  301437 cache.go:65] Caching tarball of preloaded images
	I1217 00:43:13.196178  301437 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 00:43:13.196220  301437 preload.go:238] Found /home/jenkins/minikube-integration/22168-12816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1217 00:43:13.196227  301437 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1217 00:43:13.196313  301437 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/embed-certs-153232/config.json ...
	I1217 00:43:13.216971  301437 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 00:43:13.217009  301437 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 00:43:13.217030  301437 cache.go:243] Successfully downloaded all kic artifacts
	I1217 00:43:13.217063  301437 start.go:360] acquireMachinesLock for embed-certs-153232: {Name:mkd806ec7efded4ac7bfe60ed725b3bbcfe0e575 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 00:43:13.217138  301437 start.go:364] duration metric: took 38.193µs to acquireMachinesLock for "embed-certs-153232"
	I1217 00:43:13.217158  301437 start.go:96] Skipping create...Using existing machine configuration
	I1217 00:43:13.217166  301437 fix.go:54] fixHost starting: 
	I1217 00:43:13.217354  301437 cli_runner.go:164] Run: docker container inspect embed-certs-153232 --format={{.State.Status}}
	I1217 00:43:13.237231  301437 fix.go:112] recreateIfNeeded on embed-certs-153232: state=Stopped err=<nil>
	W1217 00:43:13.237264  301437 fix.go:138] unexpected machine state, will restart: <nil>
	I1217 00:43:13.101382  300419 docker.go:218] disabling cri-docker service (if available) ...
	I1217 00:43:13.101449  300419 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 00:43:13.116971  300419 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 00:43:13.130586  300419 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 00:43:13.225729  300419 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 00:43:13.318717  300419 docker.go:234] disabling docker service ...
	I1217 00:43:13.318780  300419 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 00:43:13.334266  300419 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 00:43:13.352470  300419 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 00:43:13.437263  300419 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 00:43:13.525276  300419 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 00:43:13.539199  300419 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 00:43:13.556581  300419 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 00:43:13.556662  300419 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:43:13.566443  300419 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1217 00:43:13.566498  300419 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:43:13.576145  300419 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:43:13.586273  300419 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:43:13.596229  300419 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 00:43:13.605168  300419 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:43:13.614524  300419 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:43:13.622713  300419 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:43:13.631257  300419 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 00:43:13.638535  300419 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 00:43:13.647048  300419 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:43:13.748333  300419 ssh_runner.go:195] Run: sudo systemctl restart crio
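The block above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, switch cgroup_manager to systemd, re-add conmon_cgroup = "pod", and allow unprivileged low ports via default_sysctls, then daemon-reload and restart CRI-O. A small string-level sketch of the two simplest of those edits (a hypothetical helper operating on the file's text, not minikube's sed pipeline):

	package main

	import (
		"fmt"
		"strings"
	)

	// reconfigureCrioConf rewrites the pause_image and cgroup_manager keys,
	// mirroring the first two `sed -i` commands in the log above.
	func reconfigureCrioConf(conf string) string {
		lines := strings.Split(conf, "\n")
		for i, l := range lines {
			switch {
			case strings.Contains(l, "pause_image = "):
				lines[i] = `pause_image = "registry.k8s.io/pause:3.10.1"`
			case strings.Contains(l, "cgroup_manager = "):
				lines[i] = `cgroup_manager = "systemd"`
			}
		}
		return strings.Join(lines, "\n")
	}

	func main() {
		in := "pause_image = \"registry.k8s.io/pause:3.10\"\ncgroup_manager = \"cgroupfs\"\n"
		fmt.Print(reconfigureCrioConf(in))
	}
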
	I1217 00:43:13.908411  300419 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 00:43:13.908477  300419 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 00:43:13.912591  300419 start.go:564] Will wait 60s for crictl version
	I1217 00:43:13.912642  300419 ssh_runner.go:195] Run: which crictl
	I1217 00:43:13.916604  300419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 00:43:13.941483  300419 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1217 00:43:13.941540  300419 ssh_runner.go:195] Run: crio --version
	I1217 00:43:13.968259  300419 ssh_runner.go:195] Run: crio --version
	I1217 00:43:13.997467  300419 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1217 00:43:13.998577  300419 cli_runner.go:164] Run: docker network inspect newest-cni-653717 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 00:43:14.015370  300419 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1217 00:43:14.019356  300419 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 00:43:14.030806  300419 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	W1217 00:43:10.801972  290128 pod_ready.go:104] pod "coredns-7d764666f9-6ql6r" is not "Ready", error: <nil>
	W1217 00:43:12.803027  290128 pod_ready.go:104] pod "coredns-7d764666f9-6ql6r" is not "Ready", error: <nil>
	I1217 00:43:14.031936  300419 kubeadm.go:884] updating cluster {Name:newest-cni-653717 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-653717 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountI
P: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 00:43:14.032097  300419 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1217 00:43:14.032141  300419 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 00:43:14.063936  300419 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 00:43:14.063958  300419 crio.go:433] Images already preloaded, skipping extraction
	I1217 00:43:14.064029  300419 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 00:43:14.087668  300419 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 00:43:14.087689  300419 cache_images.go:86] Images are preloaded, skipping loading
	I1217 00:43:14.087697  300419 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.35.0-beta.0 crio true true} ...
	I1217 00:43:14.087798  300419 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-653717 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-653717 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 00:43:14.087898  300419 ssh_runner.go:195] Run: crio config
	I1217 00:43:14.133154  300419 cni.go:84] Creating CNI manager for ""
	I1217 00:43:14.133186  300419 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 00:43:14.133212  300419 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1217 00:43:14.133243  300419 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-653717 NodeName:newest-cni-653717 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 00:43:14.133845  300419 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-653717"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 00:43:14.133919  300419 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1217 00:43:14.141716  300419 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 00:43:14.141762  300419 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 00:43:14.149070  300419 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1217 00:43:14.161125  300419 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1217 00:43:14.173036  300419 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2218 bytes)
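The rendered kubeadm/kubelet/kube-proxy configuration just copied to /var/tmp/minikube/kubeadm.yaml.new is later compared against the existing /var/tmp/minikube/kubeadm.yaml (the `sudo diff -u ...` run at 00:43:15 below) to decide whether the control plane needs reconfiguring. A hedged local sketch of that decision, using a byte-for-byte comparison rather than a unified diff (file names from the log; not minikube's code):

	package main

	import (
		"bytes"
		"fmt"
		"os"
	)

	func main() {
		oldCfg, errOld := os.ReadFile("/var/tmp/minikube/kubeadm.yaml")
		newCfg, errNew := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
		if errOld != nil || errNew != nil {
			fmt.Fprintln(os.Stderr, "missing config:", errOld, errNew)
			return
		}
		if bytes.Equal(oldCfg, newCfg) {
			fmt.Println("running cluster does not require reconfiguration")
		} else {
			fmt.Println("kubeadm config changed; control plane restart needed")
		}
	}
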
	I1217 00:43:14.184726  300419 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1217 00:43:14.188144  300419 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 00:43:14.197550  300419 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:43:14.275214  300419 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 00:43:14.299766  300419 certs.go:69] Setting up /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/newest-cni-653717 for IP: 192.168.94.2
	I1217 00:43:14.299790  300419 certs.go:195] generating shared ca certs ...
	I1217 00:43:14.299808  300419 certs.go:227] acquiring lock for ca certs: {Name:mk3fafd0dd66863a6056cb02497503a5e6afecf4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:43:14.300005  300419 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22168-12816/.minikube/ca.key
	I1217 00:43:14.300070  300419 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22168-12816/.minikube/proxy-client-ca.key
	I1217 00:43:14.300093  300419 certs.go:257] generating profile certs ...
	I1217 00:43:14.300204  300419 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/newest-cni-653717/client.key
	I1217 00:43:14.300278  300419 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/newest-cni-653717/apiserver.key.17c07d81
	I1217 00:43:14.300344  300419 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/newest-cni-653717/proxy-client.key
	I1217 00:43:14.300489  300419 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/16354.pem (1338 bytes)
	W1217 00:43:14.300535  300419 certs.go:480] ignoring /home/jenkins/minikube-integration/22168-12816/.minikube/certs/16354_empty.pem, impossibly tiny 0 bytes
	I1217 00:43:14.300550  300419 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca-key.pem (1679 bytes)
	I1217 00:43:14.300597  300419 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem (1078 bytes)
	I1217 00:43:14.300636  300419 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/cert.pem (1123 bytes)
	I1217 00:43:14.300673  300419 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/key.pem (1679 bytes)
	I1217 00:43:14.300732  300419 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/files/etc/ssl/certs/163542.pem (1708 bytes)
	I1217 00:43:14.301517  300419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 00:43:14.321493  300419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1217 00:43:14.339778  300419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 00:43:14.358392  300419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1671 bytes)
	I1217 00:43:14.381070  300419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/newest-cni-653717/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1217 00:43:14.399416  300419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/newest-cni-653717/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1217 00:43:14.416061  300419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/newest-cni-653717/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 00:43:14.432412  300419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/newest-cni-653717/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1217 00:43:14.448688  300419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/files/etc/ssl/certs/163542.pem --> /usr/share/ca-certificates/163542.pem (1708 bytes)
	I1217 00:43:14.465560  300419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 00:43:14.482118  300419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/certs/16354.pem --> /usr/share/ca-certificates/16354.pem (1338 bytes)
	I1217 00:43:14.500928  300419 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 00:43:14.512899  300419 ssh_runner.go:195] Run: openssl version
	I1217 00:43:14.518834  300419 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/163542.pem
	I1217 00:43:14.525797  300419 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/163542.pem /etc/ssl/certs/163542.pem
	I1217 00:43:14.532879  300419 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/163542.pem
	I1217 00:43:14.536295  300419 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 00:13 /usr/share/ca-certificates/163542.pem
	I1217 00:43:14.536344  300419 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/163542.pem
	I1217 00:43:14.570454  300419 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 00:43:14.578196  300419 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:43:14.585485  300419 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 00:43:14.592558  300419 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:43:14.595934  300419 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 00:05 /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:43:14.595973  300419 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:43:14.630063  300419 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 00:43:14.638031  300419 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/16354.pem
	I1217 00:43:14.645344  300419 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/16354.pem /etc/ssl/certs/16354.pem
	I1217 00:43:14.652719  300419 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16354.pem
	I1217 00:43:14.656452  300419 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 00:13 /usr/share/ca-certificates/16354.pem
	I1217 00:43:14.656491  300419 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16354.pem
	I1217 00:43:14.691440  300419 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
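Each CA file copied into /usr/share/ca-certificates above is linked into /etc/ssl/certs under its OpenSSL subject hash (for example 51391683.0), so the system trust store can resolve it. A hedged sketch of the same hash-and-link step, shelling out to openssl exactly as the log does (assumes openssl is installed and the process may write to /etc/ssl/certs):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		cert := "/usr/share/ca-certificates/16354.pem"
		// `openssl x509 -hash -noout -in <cert>` prints the subject hash used
		// as the symlink name in /etc/ssl/certs.
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
		_ = os.Remove(link) // same effect as `ln -fs`
		if err := os.Symlink(cert, link); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
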
	I1217 00:43:14.698802  300419 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 00:43:14.702646  300419 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 00:43:14.735879  300419 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 00:43:14.770301  300419 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 00:43:14.809543  300419 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 00:43:14.854438  300419 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 00:43:14.901721  300419 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
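The series of `openssl x509 -noout -checkend 86400` runs above verifies that each existing control-plane certificate stays valid for at least another 24 hours before it is reused. A hedged pure-Go analogue of one such check using crypto/x509 (the path is taken from the log; this is illustrative, not minikube's own implementation):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// certValidFor reports whether the PEM certificate at path remains valid
	// for at least d, analogous to `openssl x509 -checkend <seconds>`.
	func certValidFor(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM data in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).Before(cert.NotAfter), nil
	}

	func main() {
		ok, err := certValidFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		fmt.Println(ok, err)
	}
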
	I1217 00:43:14.954325  300419 kubeadm.go:401] StartCluster: {Name:newest-cni-653717 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-653717 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP:
MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:43:14.954434  300419 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 00:43:14.954495  300419 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 00:43:14.989680  300419 cri.go:89] found id: "de646b7d108062a9c689d337e628f48d29162ce37e015f70f4dfde0b63fd7fe1"
	I1217 00:43:14.989705  300419 cri.go:89] found id: "f155d5d25fa50fc257ecd4b7da29e9d818c2cfa5f80f0f2c2dfa23e5b3025e69"
	I1217 00:43:14.989713  300419 cri.go:89] found id: "097160c44a70ef0edf501027a68baa41f06ea618f8b735835c64eb3b3c78f426"
	I1217 00:43:14.989719  300419 cri.go:89] found id: "608d066efbe101d85b2f7a5a7b16d1ad974b66b117a0796b4196b6e3e5f4c30a"
	I1217 00:43:14.989725  300419 cri.go:89] found id: ""
	I1217 00:43:14.989778  300419 ssh_runner.go:195] Run: sudo runc list -f json
	W1217 00:43:15.002543  300419 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T00:43:15Z" level=error msg="open /run/runc: no such file or directory"
	I1217 00:43:15.002606  300419 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 00:43:15.010245  300419 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1217 00:43:15.010268  300419 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1217 00:43:15.010304  300419 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1217 00:43:15.017281  300419 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1217 00:43:15.018086  300419 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-653717" does not appear in /home/jenkins/minikube-integration/22168-12816/kubeconfig
	I1217 00:43:15.018523  300419 kubeconfig.go:62] /home/jenkins/minikube-integration/22168-12816/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-653717" cluster setting kubeconfig missing "newest-cni-653717" context setting]
	I1217 00:43:15.019385  300419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/kubeconfig: {Name:mkd977475b39383babd1c1bcd25b2b3c1ea2d727 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:43:15.021191  300419 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1217 00:43:15.028401  300419 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I1217 00:43:15.028428  300419 kubeadm.go:602] duration metric: took 18.153636ms to restartPrimaryControlPlane
	I1217 00:43:15.028437  300419 kubeadm.go:403] duration metric: took 74.123324ms to StartCluster
	I1217 00:43:15.028450  300419 settings.go:142] acquiring lock: {Name:mk7d7632cd00ceda791845d793d841181ea8188a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:43:15.028515  300419 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22168-12816/kubeconfig
	I1217 00:43:15.029409  300419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/kubeconfig: {Name:mkd977475b39383babd1c1bcd25b2b3c1ea2d727 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:43:15.029593  300419 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 00:43:15.029661  300419 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 00:43:15.029760  300419 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-653717"
	I1217 00:43:15.029776  300419 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-653717"
	W1217 00:43:15.029784  300419 addons.go:248] addon storage-provisioner should already be in state true
	I1217 00:43:15.029777  300419 addons.go:70] Setting dashboard=true in profile "newest-cni-653717"
	I1217 00:43:15.029804  300419 addons.go:239] Setting addon dashboard=true in "newest-cni-653717"
	I1217 00:43:15.029810  300419 host.go:66] Checking if "newest-cni-653717" exists ...
	I1217 00:43:15.029803  300419 addons.go:70] Setting default-storageclass=true in profile "newest-cni-653717"
	I1217 00:43:15.029832  300419 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-653717"
	W1217 00:43:15.029814  300419 addons.go:248] addon dashboard should already be in state true
	I1217 00:43:15.029895  300419 host.go:66] Checking if "newest-cni-653717" exists ...
	I1217 00:43:15.030178  300419 cli_runner.go:164] Run: docker container inspect newest-cni-653717 --format={{.State.Status}}
	I1217 00:43:15.029785  300419 config.go:182] Loaded profile config "newest-cni-653717": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1217 00:43:15.030316  300419 cli_runner.go:164] Run: docker container inspect newest-cni-653717 --format={{.State.Status}}
	I1217 00:43:15.030406  300419 cli_runner.go:164] Run: docker container inspect newest-cni-653717 --format={{.State.Status}}
	I1217 00:43:15.033423  300419 out.go:179] * Verifying Kubernetes components...
	I1217 00:43:15.034444  300419 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:43:15.055791  300419 addons.go:239] Setting addon default-storageclass=true in "newest-cni-653717"
	W1217 00:43:15.056419  300419 addons.go:248] addon default-storageclass should already be in state true
	I1217 00:43:15.056521  300419 host.go:66] Checking if "newest-cni-653717" exists ...
	I1217 00:43:15.057770  300419 cli_runner.go:164] Run: docker container inspect newest-cni-653717 --format={{.State.Status}}
	I1217 00:43:15.058417  300419 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1217 00:43:15.058417  300419 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 00:43:15.059574  300419 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:43:15.059606  300419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 00:43:15.059653  300419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-653717
	I1217 00:43:15.059575  300419 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1217 00:43:15.060919  300419 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1217 00:43:15.060937  300419 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1217 00:43:15.061206  300419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-653717
	I1217 00:43:15.094486  300419 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 00:43:15.094507  300419 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 00:43:15.094581  300419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-653717
	I1217 00:43:15.096483  300419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/newest-cni-653717/id_rsa Username:docker}
	I1217 00:43:15.101728  300419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/newest-cni-653717/id_rsa Username:docker}
	I1217 00:43:15.120293  300419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/newest-cni-653717/id_rsa Username:docker}
	I1217 00:43:15.176431  300419 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 00:43:15.188738  300419 api_server.go:52] waiting for apiserver process to appear ...
	I1217 00:43:15.188802  300419 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:43:15.202898  300419 api_server.go:72] duration metric: took 173.270552ms to wait for apiserver process to appear ...
	I1217 00:43:15.202924  300419 api_server.go:88] waiting for apiserver healthz status ...
	I1217 00:43:15.202948  300419 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1217 00:43:15.205761  300419 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1217 00:43:15.205784  300419 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1217 00:43:15.211796  300419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:43:15.220428  300419 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1217 00:43:15.220451  300419 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1217 00:43:15.225893  300419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:43:15.235805  300419 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1217 00:43:15.235823  300419 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1217 00:43:15.248794  300419 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1217 00:43:15.248822  300419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1217 00:43:15.261412  300419 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1217 00:43:15.261432  300419 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1217 00:43:15.274541  300419 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1217 00:43:15.274566  300419 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1217 00:43:15.287628  300419 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1217 00:43:15.287647  300419 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1217 00:43:15.302277  300419 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1217 00:43:15.302302  300419 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1217 00:43:15.314969  300419 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1217 00:43:15.315021  300419 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1217 00:43:15.327529  300419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
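Note: the staged dashboard manifests are applied in the single kubectl invocation above. Once it completes, the created objects can be inspected with the same bundled kubectl; the "kubernetes-dashboard" namespace is an assumption here, taken from the standard dashboard-ns.yaml shipped with the addon:

	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl \
	  -n kubernetes-dashboard get deploy,svc,sa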
	I1217 00:43:16.506889  300419 api_server.go:279] https://192.168.94.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1217 00:43:16.506915  300419 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1217 00:43:16.506930  300419 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1217 00:43:16.512154  300419 api_server.go:279] https://192.168.94.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1217 00:43:16.512199  300419 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1217 00:43:16.704136  300419 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1217 00:43:16.708886  300419 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1217 00:43:16.708913  300419 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1217 00:43:17.039543  300419 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.827715953s)
	I1217 00:43:17.039628  300419 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.813707626s)
	I1217 00:43:17.039738  300419 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.712174331s)
	I1217 00:43:17.041405  300419 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-653717 addons enable metrics-server
	
	I1217 00:43:17.050070  300419 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1217 00:43:17.051118  300419 addons.go:530] duration metric: took 2.02146311s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1217 00:43:17.203089  300419 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1217 00:43:17.207726  300419 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1217 00:43:17.207752  300419 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1217 00:43:17.703086  300419 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1217 00:43:17.708106  300419 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1217 00:43:17.709199  300419 api_server.go:141] control plane version: v1.35.0-beta.0
	I1217 00:43:17.709228  300419 api_server.go:131] duration metric: took 2.506295852s to wait for apiserver health ...
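Note: the 403 and 500 responses logged above are the normal progression while RBAC bootstrap roles and the default priority classes are still being installed; the wait simply re-polls /healthz until it returns 200 "ok". A rough equivalent of that readiness loop, with the endpoint taken from the log (-k skips TLS verification for this illustrative anonymous probe):

	until [ "$(curl -sk -o /dev/null -w '%{http_code}' https://192.168.94.2:8443/healthz)" = "200" ]; do
	  sleep 0.5
	done
	echo "apiserver reports healthy"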
	I1217 00:43:17.709241  300419 system_pods.go:43] waiting for kube-system pods to appear ...
	I1217 00:43:17.712970  300419 system_pods.go:59] 8 kube-system pods found
	I1217 00:43:17.713053  300419 system_pods.go:61] "coredns-7d764666f9-djwjl" [741342b4-626d-4282-ba19-0e8b37eb2556] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1217 00:43:17.713080  300419 system_pods.go:61] "etcd-newest-cni-653717" [8210d4d5-f66f-43fe-b160-e85265f0dcd0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1217 00:43:17.713097  300419 system_pods.go:61] "kindnet-xmw8c" [7688d3d1-e8d9-4b27-bd63-412f8972c114] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1217 00:43:17.713109  300419 system_pods.go:61] "kube-apiserver-newest-cni-653717" [2a8f1a0d-5c29-49c7-b857-e82bc22e048f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1217 00:43:17.713130  300419 system_pods.go:61] "kube-controller-manager-newest-cni-653717" [d368a2d6-d0bf-4119-982a-d08d313d1433] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1217 00:43:17.713142  300419 system_pods.go:61] "kube-proxy-9jd8t" [e7d2bcca-b703-4fd2-9af0-c08825a47e85] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1217 00:43:17.713152  300419 system_pods.go:61] "kube-scheduler-newest-cni-653717" [f17c94c7-8363-4f0d-a31c-6db9a2b0f14c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1217 00:43:17.713163  300419 system_pods.go:61] "storage-provisioner" [e5c636ed-8536-4f92-8033-757cda2e5a8e] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1217 00:43:17.713175  300419 system_pods.go:74] duration metric: took 3.926365ms to wait for pod list to return data ...
	I1217 00:43:17.713187  300419 default_sa.go:34] waiting for default service account to be created ...
	I1217 00:43:17.715539  300419 default_sa.go:45] found service account: "default"
	I1217 00:43:17.715559  300419 default_sa.go:55] duration metric: took 2.36225ms for default service account to be created ...
	I1217 00:43:17.715572  300419 kubeadm.go:587] duration metric: took 2.68594841s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1217 00:43:17.715594  300419 node_conditions.go:102] verifying NodePressure condition ...
	I1217 00:43:17.717862  300419 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1217 00:43:17.717894  300419 node_conditions.go:123] node cpu capacity is 8
	I1217 00:43:17.717911  300419 node_conditions.go:105] duration metric: took 2.311759ms to run NodePressure ...
	I1217 00:43:17.717927  300419 start.go:242] waiting for startup goroutines ...
	I1217 00:43:17.717941  300419 start.go:247] waiting for cluster config update ...
	I1217 00:43:17.717955  300419 start.go:256] writing updated cluster config ...
	I1217 00:43:17.718209  300419 ssh_runner.go:195] Run: rm -f paused
	I1217 00:43:17.778087  300419 start.go:625] kubectl: 1.34.3, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1217 00:43:17.780314  300419 out.go:179] * Done! kubectl is now configured to use "newest-cni-653717" cluster and "default" namespace by default
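Note: the "minor skew: 1" message above compares the local kubectl (1.34.3) against the cluster version (1.35.0-beta.0); kubectl is supported within one minor version of the apiserver, so this is informational only. The same comparison can be reproduced by hand (output shape varies slightly across kubectl releases):

	kubectl version -o json | grep -E '"(gitVersion|major|minor)"'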
	I1217 00:43:13.239674  301437 out.go:252] * Restarting existing docker container for "embed-certs-153232" ...
	I1217 00:43:13.239739  301437 cli_runner.go:164] Run: docker start embed-certs-153232
	I1217 00:43:13.518313  301437 cli_runner.go:164] Run: docker container inspect embed-certs-153232 --format={{.State.Status}}
	I1217 00:43:13.538713  301437 kic.go:430] container "embed-certs-153232" state is running.
	I1217 00:43:13.539243  301437 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-153232
	I1217 00:43:13.561482  301437 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/embed-certs-153232/config.json ...
	I1217 00:43:13.561678  301437 machine.go:94] provisionDockerMachine start ...
	I1217 00:43:13.561743  301437 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-153232
	I1217 00:43:13.581191  301437 main.go:143] libmachine: Using SSH client type: native
	I1217 00:43:13.581560  301437 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1217 00:43:13.581584  301437 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 00:43:13.582216  301437 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45578->127.0.0.1:33098: read: connection reset by peer
	I1217 00:43:16.730553  301437 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-153232
	
	I1217 00:43:16.730580  301437 ubuntu.go:182] provisioning hostname "embed-certs-153232"
	I1217 00:43:16.730642  301437 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-153232
	I1217 00:43:16.752281  301437 main.go:143] libmachine: Using SSH client type: native
	I1217 00:43:16.752586  301437 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1217 00:43:16.752607  301437 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-153232 && echo "embed-certs-153232" | sudo tee /etc/hostname
	I1217 00:43:16.896361  301437 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-153232
	
	I1217 00:43:16.896439  301437 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-153232
	I1217 00:43:16.920667  301437 main.go:143] libmachine: Using SSH client type: native
	I1217 00:43:16.920984  301437 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1217 00:43:16.921041  301437 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-153232' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-153232/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-153232' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 00:43:17.053232  301437 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 00:43:17.053276  301437 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22168-12816/.minikube CaCertPath:/home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22168-12816/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22168-12816/.minikube}
	I1217 00:43:17.053302  301437 ubuntu.go:190] setting up certificates
	I1217 00:43:17.053314  301437 provision.go:84] configureAuth start
	I1217 00:43:17.053376  301437 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-153232
	I1217 00:43:17.072569  301437 provision.go:143] copyHostCerts
	I1217 00:43:17.072650  301437 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-12816/.minikube/cert.pem, removing ...
	I1217 00:43:17.072666  301437 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-12816/.minikube/cert.pem
	I1217 00:43:17.072747  301437 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22168-12816/.minikube/cert.pem (1123 bytes)
	I1217 00:43:17.072921  301437 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-12816/.minikube/key.pem, removing ...
	I1217 00:43:17.072932  301437 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-12816/.minikube/key.pem
	I1217 00:43:17.072976  301437 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22168-12816/.minikube/key.pem (1679 bytes)
	I1217 00:43:17.073078  301437 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-12816/.minikube/ca.pem, removing ...
	I1217 00:43:17.073090  301437 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-12816/.minikube/ca.pem
	I1217 00:43:17.073128  301437 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22168-12816/.minikube/ca.pem (1078 bytes)
	I1217 00:43:17.073199  301437 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22168-12816/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca-key.pem org=jenkins.embed-certs-153232 san=[127.0.0.1 192.168.85.2 embed-certs-153232 localhost minikube]
	I1217 00:43:17.178876  301437 provision.go:177] copyRemoteCerts
	I1217 00:43:17.178931  301437 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 00:43:17.178961  301437 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-153232
	I1217 00:43:17.196660  301437 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/embed-certs-153232/id_rsa Username:docker}
	I1217 00:43:17.289862  301437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1217 00:43:17.307299  301437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1217 00:43:17.324420  301437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1217 00:43:17.341134  301437 provision.go:87] duration metric: took 287.803695ms to configureAuth
	I1217 00:43:17.341160  301437 ubuntu.go:206] setting minikube options for container-runtime
	I1217 00:43:17.341354  301437 config.go:182] Loaded profile config "embed-certs-153232": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:43:17.341461  301437 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-153232
	I1217 00:43:17.358979  301437 main.go:143] libmachine: Using SSH client type: native
	I1217 00:43:17.359220  301437 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1217 00:43:17.359239  301437 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 00:43:17.727983  301437 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 00:43:17.728020  301437 machine.go:97] duration metric: took 4.166327334s to provisionDockerMachine
	I1217 00:43:17.728033  301437 start.go:293] postStartSetup for "embed-certs-153232" (driver="docker")
	I1217 00:43:17.728045  301437 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 00:43:17.728101  301437 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 00:43:17.728142  301437 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-153232
	I1217 00:43:17.748654  301437 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/embed-certs-153232/id_rsa Username:docker}
	I1217 00:43:17.846509  301437 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 00:43:17.850221  301437 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 00:43:17.850243  301437 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 00:43:17.850252  301437 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-12816/.minikube/addons for local assets ...
	I1217 00:43:17.850296  301437 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-12816/.minikube/files for local assets ...
	I1217 00:43:17.850370  301437 filesync.go:149] local asset: /home/jenkins/minikube-integration/22168-12816/.minikube/files/etc/ssl/certs/163542.pem -> 163542.pem in /etc/ssl/certs
	I1217 00:43:17.850481  301437 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 00:43:17.857806  301437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/files/etc/ssl/certs/163542.pem --> /etc/ssl/certs/163542.pem (1708 bytes)
	I1217 00:43:17.875672  301437 start.go:296] duration metric: took 147.626477ms for postStartSetup
	I1217 00:43:17.875743  301437 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 00:43:17.875789  301437 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-153232
	I1217 00:43:17.896453  301437 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/embed-certs-153232/id_rsa Username:docker}
	I1217 00:43:17.988329  301437 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 00:43:17.992617  301437 fix.go:56] duration metric: took 4.775446798s for fixHost
	I1217 00:43:17.992636  301437 start.go:83] releasing machines lock for "embed-certs-153232", held for 4.77548866s
	I1217 00:43:17.992701  301437 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-153232
	I1217 00:43:18.011378  301437 ssh_runner.go:195] Run: cat /version.json
	I1217 00:43:18.011453  301437 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-153232
	I1217 00:43:18.011508  301437 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 00:43:18.011572  301437 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-153232
	I1217 00:43:18.031047  301437 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/embed-certs-153232/id_rsa Username:docker}
	I1217 00:43:18.031364  301437 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/embed-certs-153232/id_rsa Username:docker}
	I1217 00:43:18.124007  301437 ssh_runner.go:195] Run: systemctl --version
	I1217 00:43:18.191518  301437 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 00:43:18.231195  301437 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 00:43:18.236329  301437 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 00:43:18.236388  301437 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 00:43:18.244880  301437 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1217 00:43:18.244899  301437 start.go:496] detecting cgroup driver to use...
	I1217 00:43:18.244930  301437 detect.go:190] detected "systemd" cgroup driver on host os
	I1217 00:43:18.244971  301437 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 00:43:18.259621  301437 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 00:43:18.272633  301437 docker.go:218] disabling cri-docker service (if available) ...
	I1217 00:43:18.272687  301437 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 00:43:18.286128  301437 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 00:43:18.298271  301437 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 00:43:18.395785  301437 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 00:43:18.493494  301437 docker.go:234] disabling docker service ...
	I1217 00:43:18.493559  301437 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 00:43:18.509760  301437 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 00:43:18.524655  301437 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 00:43:18.623023  301437 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 00:43:18.710293  301437 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 00:43:18.723406  301437 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 00:43:18.736848  301437 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 00:43:18.736910  301437 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:43:18.745575  301437 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1217 00:43:18.745628  301437 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:43:18.754055  301437 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:43:18.762572  301437 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:43:18.771966  301437 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 00:43:18.779835  301437 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:43:18.788694  301437 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:43:18.797626  301437 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:43:18.807631  301437 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 00:43:18.815247  301437 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 00:43:18.823739  301437 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:43:18.904870  301437 ssh_runner.go:195] Run: sudo systemctl restart crio
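Note: the sed calls above rewrite /etc/crio/crio.conf.d/02-crio.conf in place (pause image, systemd cgroup manager, conmon cgroup, and the unprivileged-port sysctl) before CRI-O is restarted. A quick way to confirm the runtime picked the settings up after the restart, using only the file path and crictl endpoint already configured in this log:

	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf
	sudo systemctl is-active crio && sudo crictl info > /dev/null && echo "CRI-O is up and answering CRI requests"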
	I1217 00:43:19.049353  301437 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 00:43:19.049410  301437 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 00:43:19.053190  301437 start.go:564] Will wait 60s for crictl version
	I1217 00:43:19.053239  301437 ssh_runner.go:195] Run: which crictl
	I1217 00:43:19.056644  301437 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 00:43:19.081878  301437 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1217 00:43:19.081954  301437 ssh_runner.go:195] Run: crio --version
	I1217 00:43:19.108104  301437 ssh_runner.go:195] Run: crio --version
	I1217 00:43:19.137756  301437 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	W1217 00:43:14.803418  290128 pod_ready.go:104] pod "coredns-7d764666f9-6ql6r" is not "Ready", error: <nil>
	W1217 00:43:16.804199  290128 pod_ready.go:104] pod "coredns-7d764666f9-6ql6r" is not "Ready", error: <nil>
	W1217 00:43:19.303496  290128 pod_ready.go:104] pod "coredns-7d764666f9-6ql6r" is not "Ready", error: <nil>
	I1217 00:43:19.138849  301437 cli_runner.go:164] Run: docker network inspect embed-certs-153232 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 00:43:19.156769  301437 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1217 00:43:19.160695  301437 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 00:43:19.171432  301437 kubeadm.go:884] updating cluster {Name:embed-certs-153232 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-153232 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 00:43:19.171600  301437 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1217 00:43:19.171663  301437 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 00:43:19.201717  301437 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 00:43:19.201761  301437 crio.go:433] Images already preloaded, skipping extraction
	I1217 00:43:19.201804  301437 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 00:43:19.230225  301437 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 00:43:19.230250  301437 cache_images.go:86] Images are preloaded, skipping loading
	I1217 00:43:19.230261  301437 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.2 crio true true} ...
	I1217 00:43:19.230403  301437 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-153232 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:embed-certs-153232 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 00:43:19.230488  301437 ssh_runner.go:195] Run: crio config
	I1217 00:43:19.285651  301437 cni.go:84] Creating CNI manager for ""
	I1217 00:43:19.285678  301437 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 00:43:19.285695  301437 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 00:43:19.285721  301437 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-153232 NodeName:embed-certs-153232 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 00:43:19.285887  301437 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-153232"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 00:43:19.285957  301437 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1217 00:43:19.293949  301437 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 00:43:19.294013  301437 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 00:43:19.301937  301437 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1217 00:43:19.314092  301437 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1217 00:43:19.326389  301437 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
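Note: the rendered kubeadm config above is copied to /var/tmp/minikube/kubeadm.yaml.new and later compared against the existing copy, as with the "diff -u" step for the other profile earlier in this log. If a manual sanity check is wanted, recent kubeadm releases (v1.26+) can validate the file directly; the binary path below mirrors the layout shown in the log and is an assumption:

	sudo /var/lib/minikube/binaries/v1.34.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new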
	I1217 00:43:19.338404  301437 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1217 00:43:19.342368  301437 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 00:43:19.352249  301437 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:43:19.450887  301437 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 00:43:19.476061  301437 certs.go:69] Setting up /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/embed-certs-153232 for IP: 192.168.85.2
	I1217 00:43:19.476081  301437 certs.go:195] generating shared ca certs ...
	I1217 00:43:19.476102  301437 certs.go:227] acquiring lock for ca certs: {Name:mk3fafd0dd66863a6056cb02497503a5e6afecf4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:43:19.476257  301437 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22168-12816/.minikube/ca.key
	I1217 00:43:19.476315  301437 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22168-12816/.minikube/proxy-client-ca.key
	I1217 00:43:19.476328  301437 certs.go:257] generating profile certs ...
	I1217 00:43:19.476450  301437 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/embed-certs-153232/client.key
	I1217 00:43:19.476538  301437 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/embed-certs-153232/apiserver.key.9c5b6ce4
	I1217 00:43:19.476607  301437 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/embed-certs-153232/proxy-client.key
	I1217 00:43:19.476783  301437 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/16354.pem (1338 bytes)
	W1217 00:43:19.476845  301437 certs.go:480] ignoring /home/jenkins/minikube-integration/22168-12816/.minikube/certs/16354_empty.pem, impossibly tiny 0 bytes
	I1217 00:43:19.476859  301437 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca-key.pem (1679 bytes)
	I1217 00:43:19.476896  301437 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem (1078 bytes)
	I1217 00:43:19.476938  301437 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/cert.pem (1123 bytes)
	I1217 00:43:19.476980  301437 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/key.pem (1679 bytes)
	I1217 00:43:19.477065  301437 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/files/etc/ssl/certs/163542.pem (1708 bytes)
	I1217 00:43:19.477864  301437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 00:43:19.496400  301437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1217 00:43:19.514827  301437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 00:43:19.534257  301437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1671 bytes)
	I1217 00:43:19.558896  301437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/embed-certs-153232/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1217 00:43:19.577374  301437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/embed-certs-153232/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 00:43:19.593935  301437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/embed-certs-153232/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 00:43:19.610639  301437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/embed-certs-153232/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1217 00:43:19.629006  301437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 00:43:19.656848  301437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/certs/16354.pem --> /usr/share/ca-certificates/16354.pem (1338 bytes)
	I1217 00:43:19.675161  301437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/files/etc/ssl/certs/163542.pem --> /usr/share/ca-certificates/163542.pem (1708 bytes)
	I1217 00:43:19.691709  301437 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 00:43:19.708839  301437 ssh_runner.go:195] Run: openssl version
	I1217 00:43:19.714839  301437 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:43:19.723416  301437 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 00:43:19.733187  301437 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:43:19.738445  301437 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 00:05 /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:43:19.738503  301437 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:43:19.791538  301437 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 00:43:19.799763  301437 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/16354.pem
	I1217 00:43:19.808195  301437 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/16354.pem /etc/ssl/certs/16354.pem
	I1217 00:43:19.815917  301437 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16354.pem
	I1217 00:43:19.820648  301437 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 00:13 /usr/share/ca-certificates/16354.pem
	I1217 00:43:19.820700  301437 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16354.pem
	I1217 00:43:19.862242  301437 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 00:43:19.871343  301437 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/163542.pem
	I1217 00:43:19.879739  301437 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/163542.pem /etc/ssl/certs/163542.pem
	I1217 00:43:19.887242  301437 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/163542.pem
	I1217 00:43:19.890787  301437 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 00:13 /usr/share/ca-certificates/163542.pem
	I1217 00:43:19.890832  301437 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/163542.pem
	I1217 00:43:19.926488  301437 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 00:43:19.934257  301437 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 00:43:19.937849  301437 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 00:43:19.974804  301437 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 00:43:20.020581  301437 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 00:43:20.062588  301437 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 00:43:20.122228  301437 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 00:43:20.176358  301437 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1217 00:43:20.211282  301437 kubeadm.go:401] StartCluster: {Name:embed-certs-153232 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-153232 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:43:20.211381  301437 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 00:43:20.211436  301437 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 00:43:20.244491  301437 cri.go:89] found id: "dadde2213b8a894873343cf42602c1bedb001a3311bd9672a69d0fa4a07d9786"
	I1217 00:43:20.244513  301437 cri.go:89] found id: "117e1e782a79833091ca7f1a9da4be915158517d3d54c5674f3b4e0875f18cce"
	I1217 00:43:20.244519  301437 cri.go:89] found id: "f3a000d40d6d7ebc54a27ecd08dc5aa3b530c6e66b7327ec3ec09941fca5d2ce"
	I1217 00:43:20.244523  301437 cri.go:89] found id: "a770bc08061f975f567cb7fb7cec6883ec6d5215d19863d7ddb2cc0049571d8b"
	I1217 00:43:20.244527  301437 cri.go:89] found id: ""
	I1217 00:43:20.244573  301437 ssh_runner.go:195] Run: sudo runc list -f json
	W1217 00:43:20.256662  301437 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T00:43:20Z" level=error msg="open /run/runc: no such file or directory"
	I1217 00:43:20.256756  301437 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 00:43:20.265068  301437 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1217 00:43:20.265087  301437 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1217 00:43:20.265131  301437 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1217 00:43:20.274191  301437 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1217 00:43:20.275439  301437 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-153232" does not appear in /home/jenkins/minikube-integration/22168-12816/kubeconfig
	I1217 00:43:20.276139  301437 kubeconfig.go:62] /home/jenkins/minikube-integration/22168-12816/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-153232" cluster setting kubeconfig missing "embed-certs-153232" context setting]
	I1217 00:43:20.277088  301437 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/kubeconfig: {Name:mkd977475b39383babd1c1bcd25b2b3c1ea2d727 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:43:20.279505  301437 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1217 00:43:20.288899  301437 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1217 00:43:20.288932  301437 kubeadm.go:602] duration metric: took 23.834476ms to restartPrimaryControlPlane
	I1217 00:43:20.288942  301437 kubeadm.go:403] duration metric: took 77.669136ms to StartCluster
	I1217 00:43:20.288957  301437 settings.go:142] acquiring lock: {Name:mk7d7632cd00ceda791845d793d841181ea8188a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:43:20.289043  301437 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22168-12816/kubeconfig
	I1217 00:43:20.291231  301437 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/kubeconfig: {Name:mkd977475b39383babd1c1bcd25b2b3c1ea2d727 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:43:20.291470  301437 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 00:43:20.291526  301437 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 00:43:20.291628  301437 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-153232"
	I1217 00:43:20.291649  301437 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-153232"
	W1217 00:43:20.291661  301437 addons.go:248] addon storage-provisioner should already be in state true
	I1217 00:43:20.291657  301437 addons.go:70] Setting dashboard=true in profile "embed-certs-153232"
	I1217 00:43:20.291679  301437 addons.go:239] Setting addon dashboard=true in "embed-certs-153232"
	W1217 00:43:20.291691  301437 addons.go:248] addon dashboard should already be in state true
	I1217 00:43:20.291692  301437 host.go:66] Checking if "embed-certs-153232" exists ...
	I1217 00:43:20.291706  301437 config.go:182] Loaded profile config "embed-certs-153232": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:43:20.291718  301437 addons.go:70] Setting default-storageclass=true in profile "embed-certs-153232"
	I1217 00:43:20.291738  301437 host.go:66] Checking if "embed-certs-153232" exists ...
	I1217 00:43:20.291753  301437 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-153232"
	I1217 00:43:20.292122  301437 cli_runner.go:164] Run: docker container inspect embed-certs-153232 --format={{.State.Status}}
	I1217 00:43:20.292245  301437 cli_runner.go:164] Run: docker container inspect embed-certs-153232 --format={{.State.Status}}
	I1217 00:43:20.292252  301437 cli_runner.go:164] Run: docker container inspect embed-certs-153232 --format={{.State.Status}}
	I1217 00:43:20.294162  301437 out.go:179] * Verifying Kubernetes components...
	I1217 00:43:20.295150  301437 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:43:20.318342  301437 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 00:43:20.318343  301437 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1217 00:43:20.319356  301437 addons.go:239] Setting addon default-storageclass=true in "embed-certs-153232"
	W1217 00:43:20.319377  301437 addons.go:248] addon default-storageclass should already be in state true
	I1217 00:43:20.319402  301437 host.go:66] Checking if "embed-certs-153232" exists ...
	I1217 00:43:20.319532  301437 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:43:20.319546  301437 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 00:43:20.319598  301437 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-153232
	I1217 00:43:20.319845  301437 cli_runner.go:164] Run: docker container inspect embed-certs-153232 --format={{.State.Status}}
	I1217 00:43:20.321455  301437 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	
	
	==> CRI-O <==
	Dec 17 00:43:17 newest-cni-653717 crio[524]: time="2025-12-17T00:43:17.676364961Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 00:43:17 newest-cni-653717 crio[524]: time="2025-12-17T00:43:17.679598075Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=970c12eb-d302-4902-982a-e17fa7460ebd name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 17 00:43:17 newest-cni-653717 crio[524]: time="2025-12-17T00:43:17.680093257Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=b5e3616c-d6d5-4f9b-974a-5a799e82d59c name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 17 00:43:17 newest-cni-653717 crio[524]: time="2025-12-17T00:43:17.68095013Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 17 00:43:17 newest-cni-653717 crio[524]: time="2025-12-17T00:43:17.68158869Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 17 00:43:17 newest-cni-653717 crio[524]: time="2025-12-17T00:43:17.681768467Z" level=info msg="Ran pod sandbox 69f7c5ef1f5464482d5deaa8b03d5e7d7cba099fbe94cfe84e602f763a11268c with infra container: kube-system/kindnet-xmw8c/POD" id=970c12eb-d302-4902-982a-e17fa7460ebd name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 17 00:43:17 newest-cni-653717 crio[524]: time="2025-12-17T00:43:17.682518702Z" level=info msg="Ran pod sandbox e070405644c0a834a9e1609e11c5b5d30e0cb39c5148560aad953a9f5de4ce88 with infra container: kube-system/kube-proxy-9jd8t/POD" id=b5e3616c-d6d5-4f9b-974a-5a799e82d59c name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 17 00:43:17 newest-cni-653717 crio[524]: time="2025-12-17T00:43:17.682852572Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=f2f5b42c-2df0-404e-b106-b340569884ff name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:43:17 newest-cni-653717 crio[524]: time="2025-12-17T00:43:17.68352994Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=fe67e68a-058e-4430-92b9-2b5a8feefd8d name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:43:17 newest-cni-653717 crio[524]: time="2025-12-17T00:43:17.683800439Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=70895c75-d02b-421e-83ec-42593dd42d3f name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:43:17 newest-cni-653717 crio[524]: time="2025-12-17T00:43:17.684470528Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=84f8e91c-5e5b-4740-b1a5-884c78a9e74a name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:43:17 newest-cni-653717 crio[524]: time="2025-12-17T00:43:17.684822501Z" level=info msg="Creating container: kube-system/kindnet-xmw8c/kindnet-cni" id=91eca6c8-378b-432b-830d-82f7ead610ea name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 00:43:17 newest-cni-653717 crio[524]: time="2025-12-17T00:43:17.685118698Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 00:43:17 newest-cni-653717 crio[524]: time="2025-12-17T00:43:17.685624609Z" level=info msg="Creating container: kube-system/kube-proxy-9jd8t/kube-proxy" id=9f3ed6f9-e074-4d7b-a7a7-ab8e1c19f401 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 00:43:17 newest-cni-653717 crio[524]: time="2025-12-17T00:43:17.685795434Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 00:43:17 newest-cni-653717 crio[524]: time="2025-12-17T00:43:17.690071673Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 00:43:17 newest-cni-653717 crio[524]: time="2025-12-17T00:43:17.690504708Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 00:43:17 newest-cni-653717 crio[524]: time="2025-12-17T00:43:17.692382945Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 00:43:17 newest-cni-653717 crio[524]: time="2025-12-17T00:43:17.692972469Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 00:43:17 newest-cni-653717 crio[524]: time="2025-12-17T00:43:17.718531045Z" level=info msg="Created container 053bf92e9a5f9a52b4b6ab67762abf5f59e6048f064f1e157e90fe6500e59fb1: kube-system/kindnet-xmw8c/kindnet-cni" id=91eca6c8-378b-432b-830d-82f7ead610ea name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 00:43:17 newest-cni-653717 crio[524]: time="2025-12-17T00:43:17.719216413Z" level=info msg="Starting container: 053bf92e9a5f9a52b4b6ab67762abf5f59e6048f064f1e157e90fe6500e59fb1" id=fafb20f4-1f0e-4b24-8041-9a1f9e21e943 name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 00:43:17 newest-cni-653717 crio[524]: time="2025-12-17T00:43:17.720666623Z" level=info msg="Created container 09db06ede95c5fa90e1d8add618ad4ec6e4856dc88d34335e62d7d1b21d156f1: kube-system/kube-proxy-9jd8t/kube-proxy" id=9f3ed6f9-e074-4d7b-a7a7-ab8e1c19f401 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 00:43:17 newest-cni-653717 crio[524]: time="2025-12-17T00:43:17.721148259Z" level=info msg="Started container" PID=1057 containerID=053bf92e9a5f9a52b4b6ab67762abf5f59e6048f064f1e157e90fe6500e59fb1 description=kube-system/kindnet-xmw8c/kindnet-cni id=fafb20f4-1f0e-4b24-8041-9a1f9e21e943 name=/runtime.v1.RuntimeService/StartContainer sandboxID=69f7c5ef1f5464482d5deaa8b03d5e7d7cba099fbe94cfe84e602f763a11268c
	Dec 17 00:43:17 newest-cni-653717 crio[524]: time="2025-12-17T00:43:17.721229635Z" level=info msg="Starting container: 09db06ede95c5fa90e1d8add618ad4ec6e4856dc88d34335e62d7d1b21d156f1" id=892a0b7d-8e33-49fb-9719-fc708637f597 name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 00:43:17 newest-cni-653717 crio[524]: time="2025-12-17T00:43:17.724172507Z" level=info msg="Started container" PID=1058 containerID=09db06ede95c5fa90e1d8add618ad4ec6e4856dc88d34335e62d7d1b21d156f1 description=kube-system/kube-proxy-9jd8t/kube-proxy id=892a0b7d-8e33-49fb-9719-fc708637f597 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e070405644c0a834a9e1609e11c5b5d30e0cb39c5148560aad953a9f5de4ce88
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	09db06ede95c5       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810   4 seconds ago       Running             kube-proxy                1                   e070405644c0a       kube-proxy-9jd8t                            kube-system
	053bf92e9a5f9       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   4 seconds ago       Running             kindnet-cni               1                   69f7c5ef1f546       kindnet-xmw8c                               kube-system
	de646b7d10806       aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b   7 seconds ago       Running             kube-apiserver            1                   ec71189f7fad3       kube-apiserver-newest-cni-653717            kube-system
	f155d5d25fa50       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc   7 seconds ago       Running             kube-controller-manager   1                   19550e8e69a45       kube-controller-manager-newest-cni-653717   kube-system
	097160c44a70e       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   7 seconds ago       Running             etcd                      1                   2aaad1633c885       etcd-newest-cni-653717                      kube-system
	608d066efbe10       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46   7 seconds ago       Running             kube-scheduler            1                   e914c561da2a6       kube-scheduler-newest-cni-653717            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-653717
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-653717
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c7bb9b74fe8fa422b352c813eb039f077f405cb1
	                    minikube.k8s.io/name=newest-cni-653717
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_17T00_42_57_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Dec 2025 00:42:54 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-653717
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Dec 2025 00:43:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Dec 2025 00:43:16 +0000   Wed, 17 Dec 2025 00:42:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Dec 2025 00:43:16 +0000   Wed, 17 Dec 2025 00:42:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Dec 2025 00:43:16 +0000   Wed, 17 Dec 2025 00:42:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Wed, 17 Dec 2025 00:43:16 +0000   Wed, 17 Dec 2025 00:42:52 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    newest-cni-653717
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 f435001bb2f82675fe905cfc693dda54
	  System UUID:                50a52395-faf4-409e-a6ea-aa486ab479f3
	  Boot ID:                    0e9cedc6-c46e-4354-b3d2-9272a8b33ae5
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-653717                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         26s
	  kube-system                 kindnet-xmw8c                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      21s
	  kube-system                 kube-apiserver-newest-cni-653717             250m (3%)     0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-controller-manager-newest-cni-653717    200m (2%)     0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-proxy-9jd8t                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         21s
	  kube-system                 kube-scheduler-newest-cni-653717             100m (1%)     0 (0%)      0 (0%)           0 (0%)         26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  21s   node-controller  Node newest-cni-653717 event: Registered Node newest-cni-653717 in Controller
	  Normal  RegisteredNode  3s    node-controller  Node newest-cni-653717 event: Registered Node newest-cni-653717 in Controller
	
	
	==> dmesg <==
	[  +0.089382] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024236] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.864694] kauditd_printk_skb: 47 callbacks suppressed
	[Dec17 00:07] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +1.006904] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +1.023891] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +1.023891] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +1.023853] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +1.023879] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +2.048755] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +4.030595] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +8.447143] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[ +16.382404] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000015] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[Dec17 00:08] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	
	
	==> etcd [097160c44a70ef0edf501027a68baa41f06ea618f8b735835c64eb3b3c78f426] <==
	{"level":"warn","ts":"2025-12-17T00:43:15.918148Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46052","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:15.928902Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46076","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:15.935185Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46078","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:15.942716Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46096","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:15.948848Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:15.955608Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46140","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:15.961536Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46152","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:15.968517Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46172","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:15.974806Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46182","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:15.987185Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:15.993204Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:16.000232Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46232","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:16.006193Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46260","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:16.012438Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46292","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:16.018541Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:16.024685Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:16.031433Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46328","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:16.037627Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46334","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:16.043871Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46348","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:16.050824Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46370","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:16.065880Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:16.072288Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46398","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:16.079928Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46422","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:16.087893Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46438","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:16.140539Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46458","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 00:43:22 up  1:25,  0 user,  load average: 3.63, 2.89, 1.96
	Linux newest-cni-653717 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [053bf92e9a5f9a52b4b6ab67762abf5f59e6048f064f1e157e90fe6500e59fb1] <==
	I1217 00:43:17.932471       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1217 00:43:17.932700       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1217 00:43:17.932816       1 main.go:148] setting mtu 1500 for CNI 
	I1217 00:43:17.932836       1 main.go:178] kindnetd IP family: "ipv4"
	I1217 00:43:17.932853       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-17T00:43:18Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1217 00:43:18.135027       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1217 00:43:18.135079       1 controller.go:381] "Waiting for informer caches to sync"
	I1217 00:43:18.135094       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1217 00:43:18.135226       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1217 00:43:18.435537       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1217 00:43:18.435595       1 metrics.go:72] Registering metrics
	I1217 00:43:18.435702       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [de646b7d108062a9c689d337e628f48d29162ce37e015f70f4dfde0b63fd7fe1] <==
	I1217 00:43:16.589361       1 aggregator.go:187] initial CRD sync complete...
	I1217 00:43:16.589391       1 autoregister_controller.go:144] Starting autoregister controller
	I1217 00:43:16.589398       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1217 00:43:16.589405       1 cache.go:39] Caches are synced for autoregister controller
	E1217 00:43:16.589782       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1217 00:43:16.590051       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1217 00:43:16.590070       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1217 00:43:16.594192       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1217 00:43:16.602071       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1217 00:43:16.606372       1 shared_informer.go:377] "Caches are synced"
	I1217 00:43:16.606390       1 policy_source.go:248] refreshing policies
	I1217 00:43:16.606408       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1217 00:43:16.621657       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1217 00:43:16.858691       1 controller.go:667] quota admission added evaluator for: namespaces
	I1217 00:43:16.884148       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1217 00:43:16.902105       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1217 00:43:16.908038       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1217 00:43:16.913481       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1217 00:43:16.943070       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.109.105.94"}
	I1217 00:43:16.955291       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.96.9.195"}
	I1217 00:43:17.486342       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1217 00:43:20.126573       1 controller.go:667] quota admission added evaluator for: endpoints
	I1217 00:43:20.275418       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1217 00:43:20.380117       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1217 00:43:20.451512       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-controller-manager [f155d5d25fa50fc257ecd4b7da29e9d818c2cfa5f80f0f2c2dfa23e5b3025e69] <==
	I1217 00:43:19.730225       1 shared_informer.go:377] "Caches are synced"
	I1217 00:43:19.730915       1 shared_informer.go:377] "Caches are synced"
	I1217 00:43:19.730975       1 shared_informer.go:377] "Caches are synced"
	I1217 00:43:19.731066       1 shared_informer.go:377] "Caches are synced"
	I1217 00:43:19.731117       1 shared_informer.go:377] "Caches are synced"
	I1217 00:43:19.731153       1 shared_informer.go:377] "Caches are synced"
	I1217 00:43:19.731118       1 shared_informer.go:377] "Caches are synced"
	I1217 00:43:19.731174       1 shared_informer.go:377] "Caches are synced"
	I1217 00:43:19.731126       1 range_allocator.go:177] "Sending events to api server"
	I1217 00:43:19.731105       1 shared_informer.go:377] "Caches are synced"
	I1217 00:43:19.731134       1 shared_informer.go:377] "Caches are synced"
	I1217 00:43:19.731227       1 shared_informer.go:377] "Caches are synced"
	I1217 00:43:19.731244       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1217 00:43:19.731250       1 shared_informer.go:370] "Waiting for caches to sync"
	I1217 00:43:19.731255       1 shared_informer.go:377] "Caches are synced"
	I1217 00:43:19.731259       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1217 00:43:19.731144       1 shared_informer.go:377] "Caches are synced"
	I1217 00:43:19.731319       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="newest-cni-653717"
	I1217 00:43:19.731385       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I1217 00:43:19.735701       1 shared_informer.go:370] "Waiting for caches to sync"
	I1217 00:43:19.736784       1 shared_informer.go:377] "Caches are synced"
	I1217 00:43:19.831512       1 shared_informer.go:377] "Caches are synced"
	I1217 00:43:19.831535       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1217 00:43:19.831539       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1217 00:43:19.836889       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [09db06ede95c5fa90e1d8add618ad4ec6e4856dc88d34335e62d7d1b21d156f1] <==
	I1217 00:43:17.765521       1 server_linux.go:53] "Using iptables proxy"
	I1217 00:43:17.834345       1 shared_informer.go:370] "Waiting for caches to sync"
	I1217 00:43:17.934770       1 shared_informer.go:377] "Caches are synced"
	I1217 00:43:17.934816       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1217 00:43:17.935085       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1217 00:43:17.953144       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1217 00:43:17.953212       1 server_linux.go:136] "Using iptables Proxier"
	I1217 00:43:17.958001       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1217 00:43:17.958355       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1217 00:43:17.958379       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 00:43:17.959828       1 config.go:106] "Starting endpoint slice config controller"
	I1217 00:43:17.959865       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1217 00:43:17.959869       1 config.go:200] "Starting service config controller"
	I1217 00:43:17.959890       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1217 00:43:17.959929       1 config.go:403] "Starting serviceCIDR config controller"
	I1217 00:43:17.959937       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1217 00:43:17.960005       1 config.go:309] "Starting node config controller"
	I1217 00:43:17.960017       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1217 00:43:17.960028       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1217 00:43:18.060910       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1217 00:43:18.060951       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1217 00:43:18.060951       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [608d066efbe101d85b2f7a5a7b16d1ad974b66b117a0796b4196b6e3e5f4c30a] <==
	I1217 00:43:15.222980       1 serving.go:386] Generated self-signed cert in-memory
	W1217 00:43:16.508440       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1217 00:43:16.508473       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1217 00:43:16.508484       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1217 00:43:16.508493       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1217 00:43:16.537748       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-beta.0"
	I1217 00:43:16.537780       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 00:43:16.540607       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1217 00:43:16.540784       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1217 00:43:16.541036       1 shared_informer.go:370] "Waiting for caches to sync"
	I1217 00:43:16.541827       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1217 00:43:16.641559       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 17 00:43:16 newest-cni-653717 kubelet[677]: I1217 00:43:16.685583     677 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-653717"
	Dec 17 00:43:16 newest-cni-653717 kubelet[677]: E1217 00:43:16.691129     677 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-653717\" already exists" pod="kube-system/etcd-newest-cni-653717"
	Dec 17 00:43:16 newest-cni-653717 kubelet[677]: I1217 00:43:16.691157     677 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-653717"
	Dec 17 00:43:16 newest-cni-653717 kubelet[677]: E1217 00:43:16.698796     677 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-653717\" already exists" pod="kube-system/kube-apiserver-newest-cni-653717"
	Dec 17 00:43:17 newest-cni-653717 kubelet[677]: I1217 00:43:17.365129     677 apiserver.go:52] "Watching apiserver"
	Dec 17 00:43:17 newest-cni-653717 kubelet[677]: E1217 00:43:17.369831     677 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-653717" containerName="kube-controller-manager"
	Dec 17 00:43:17 newest-cni-653717 kubelet[677]: E1217 00:43:17.404920     677 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-653717" containerName="kube-apiserver"
	Dec 17 00:43:17 newest-cni-653717 kubelet[677]: I1217 00:43:17.405086     677 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-653717"
	Dec 17 00:43:17 newest-cni-653717 kubelet[677]: I1217 00:43:17.405237     677 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-653717"
	Dec 17 00:43:17 newest-cni-653717 kubelet[677]: E1217 00:43:17.411040     677 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-653717\" already exists" pod="kube-system/kube-scheduler-newest-cni-653717"
	Dec 17 00:43:17 newest-cni-653717 kubelet[677]: E1217 00:43:17.411272     677 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-653717" containerName="kube-scheduler"
	Dec 17 00:43:17 newest-cni-653717 kubelet[677]: E1217 00:43:17.411790     677 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-653717\" already exists" pod="kube-system/etcd-newest-cni-653717"
	Dec 17 00:43:17 newest-cni-653717 kubelet[677]: E1217 00:43:17.411872     677 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-653717" containerName="etcd"
	Dec 17 00:43:17 newest-cni-653717 kubelet[677]: I1217 00:43:17.469851     677 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Dec 17 00:43:17 newest-cni-653717 kubelet[677]: I1217 00:43:17.532930     677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e7d2bcca-b703-4fd2-9af0-c08825a47e85-lib-modules\") pod \"kube-proxy-9jd8t\" (UID: \"e7d2bcca-b703-4fd2-9af0-c08825a47e85\") " pod="kube-system/kube-proxy-9jd8t"
	Dec 17 00:43:17 newest-cni-653717 kubelet[677]: I1217 00:43:17.533014     677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/7688d3d1-e8d9-4b27-bd63-412f8972c114-cni-cfg\") pod \"kindnet-xmw8c\" (UID: \"7688d3d1-e8d9-4b27-bd63-412f8972c114\") " pod="kube-system/kindnet-xmw8c"
	Dec 17 00:43:17 newest-cni-653717 kubelet[677]: I1217 00:43:17.533089     677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7688d3d1-e8d9-4b27-bd63-412f8972c114-lib-modules\") pod \"kindnet-xmw8c\" (UID: \"7688d3d1-e8d9-4b27-bd63-412f8972c114\") " pod="kube-system/kindnet-xmw8c"
	Dec 17 00:43:17 newest-cni-653717 kubelet[677]: I1217 00:43:17.533149     677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e7d2bcca-b703-4fd2-9af0-c08825a47e85-xtables-lock\") pod \"kube-proxy-9jd8t\" (UID: \"e7d2bcca-b703-4fd2-9af0-c08825a47e85\") " pod="kube-system/kube-proxy-9jd8t"
	Dec 17 00:43:17 newest-cni-653717 kubelet[677]: I1217 00:43:17.533391     677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7688d3d1-e8d9-4b27-bd63-412f8972c114-xtables-lock\") pod \"kindnet-xmw8c\" (UID: \"7688d3d1-e8d9-4b27-bd63-412f8972c114\") " pod="kube-system/kindnet-xmw8c"
	Dec 17 00:43:18 newest-cni-653717 kubelet[677]: E1217 00:43:18.410491     677 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-653717" containerName="kube-scheduler"
	Dec 17 00:43:18 newest-cni-653717 kubelet[677]: E1217 00:43:18.410585     677 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-653717" containerName="etcd"
	Dec 17 00:43:18 newest-cni-653717 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 17 00:43:18 newest-cni-653717 kubelet[677]: I1217 00:43:18.783758     677 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Dec 17 00:43:18 newest-cni-653717 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 17 00:43:18 newest-cni-653717 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-653717 -n newest-cni-653717
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-653717 -n newest-cni-653717: exit status 2 (322.135324ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context newest-cni-653717 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: coredns-7d764666f9-djwjl storage-provisioner dashboard-metrics-scraper-867fb5f87b-8kf6f kubernetes-dashboard-b84665fb8-9x2f8
helpers_test.go:283: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context newest-cni-653717 describe pod coredns-7d764666f9-djwjl storage-provisioner dashboard-metrics-scraper-867fb5f87b-8kf6f kubernetes-dashboard-b84665fb8-9x2f8
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context newest-cni-653717 describe pod coredns-7d764666f9-djwjl storage-provisioner dashboard-metrics-scraper-867fb5f87b-8kf6f kubernetes-dashboard-b84665fb8-9x2f8: exit status 1 (69.926957ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-7d764666f9-djwjl" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-867fb5f87b-8kf6f" not found
	Error from server (NotFound): pods "kubernetes-dashboard-b84665fb8-9x2f8" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context newest-cni-653717 describe pod coredns-7d764666f9-djwjl storage-provisioner dashboard-metrics-scraper-867fb5f87b-8kf6f kubernetes-dashboard-b84665fb8-9x2f8: exit status 1
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-653717
helpers_test.go:244: (dbg) docker inspect newest-cni-653717:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "beff396f1ecf0ad7988c26d13bbede7e2b58ac17c04e57fcb9bdf8cdfddcf41e",
	        "Created": "2025-12-17T00:42:44.576413898Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 300623,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T00:43:08.297869312Z",
	            "FinishedAt": "2025-12-17T00:43:07.467274633Z"
	        },
	        "Image": "sha256:2e44aac5cae5bb6b68b129ed5c85e80a5c1aac07706537d46ba12326f0e5c3cf",
	        "ResolvConfPath": "/var/lib/docker/containers/beff396f1ecf0ad7988c26d13bbede7e2b58ac17c04e57fcb9bdf8cdfddcf41e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/beff396f1ecf0ad7988c26d13bbede7e2b58ac17c04e57fcb9bdf8cdfddcf41e/hostname",
	        "HostsPath": "/var/lib/docker/containers/beff396f1ecf0ad7988c26d13bbede7e2b58ac17c04e57fcb9bdf8cdfddcf41e/hosts",
	        "LogPath": "/var/lib/docker/containers/beff396f1ecf0ad7988c26d13bbede7e2b58ac17c04e57fcb9bdf8cdfddcf41e/beff396f1ecf0ad7988c26d13bbede7e2b58ac17c04e57fcb9bdf8cdfddcf41e-json.log",
	        "Name": "/newest-cni-653717",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-653717:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-653717",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "beff396f1ecf0ad7988c26d13bbede7e2b58ac17c04e57fcb9bdf8cdfddcf41e",
	                "LowerDir": "/var/lib/docker/overlay2/b3d705a839526a196f0f1ae4bd0a8c2a9760f4aba6266e16997c71c4dc1dfa7d-init/diff:/var/lib/docker/overlay2/594b812fd6d8db89dab322ea9e00d43dd555e9709fb5e6953e3873cce717392c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b3d705a839526a196f0f1ae4bd0a8c2a9760f4aba6266e16997c71c4dc1dfa7d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b3d705a839526a196f0f1ae4bd0a8c2a9760f4aba6266e16997c71c4dc1dfa7d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b3d705a839526a196f0f1ae4bd0a8c2a9760f4aba6266e16997c71c4dc1dfa7d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-653717",
	                "Source": "/var/lib/docker/volumes/newest-cni-653717/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-653717",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-653717",
	                "name.minikube.sigs.k8s.io": "newest-cni-653717",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "e847a821e2f9d0ab2b9d32264e8759b233987b34c2f1987177c3de5252eca881",
	            "SandboxKey": "/var/run/docker/netns/e847a821e2f9",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33093"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33094"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33097"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33095"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33096"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-653717": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "978c2526e91c5a0b699851fa3eca8542bfa74ada0d698e43a470cd47adc72c7d",
	                    "EndpointID": "b4d8c4ca1cbe7e8db0efb55d0b4cffc0a887248f3855a8e14e5f5734e333aa5c",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "b6:ed:7f:58:f7:ab",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-653717",
	                        "beff396f1ecf"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
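Two details in the inspect output above matter for the SSH steps later in the log: HostConfig.PortBindings asks for 127.0.0.1 with an empty HostPort, so Docker assigns the host ports when the container starts, and the resolved values appear under NetworkSettings.Ports, where 22/tcp is published on 127.0.0.1:33093, the port the provisioner dials further down. A minimal Go sketch that resolves the SSH port with the same template the log lines use (it assumes the docker CLI is on PATH and reuses this run's container name):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		container := "newest-cni-653717" // container from this report; substitute your own

		// Same Go template the minikube log lines use to find where 22/tcp was published.
		format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
		if err != nil {
			fmt.Println("docker inspect failed:", err)
			return
		}
		// 33093 for this run; dynamically published ports can change whenever the
		// container is recreated or restarted, hence the lookup instead of a constant.
		fmt.Println("ssh host port:", strings.TrimSpace(string(out)))
	}
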
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-653717 -n newest-cni-653717
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-653717 -n newest-cni-653717: exit status 2 (334.26442ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-653717 logs -n 25
helpers_test.go:261: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬───────
──────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼───────
──────────────┤
	│ start   │ -p kubernetes-upgrade-803959 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-803959    │ jenkins │ v1.37.0 │ 17 Dec 25 00:42 UTC │ 17 Dec 25 00:42 UTC │
	│ delete  │ -p kubernetes-upgrade-803959                                                                                                                                                                                                                         │ kubernetes-upgrade-803959    │ jenkins │ v1.37.0 │ 17 Dec 25 00:42 UTC │ 17 Dec 25 00:42 UTC │
	│ delete  │ -p disable-driver-mounts-827138                                                                                                                                                                                                                      │ disable-driver-mounts-827138 │ jenkins │ v1.37.0 │ 17 Dec 25 00:42 UTC │ 17 Dec 25 00:42 UTC │
	│ start   │ -p default-k8s-diff-port-414413 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-414413 │ jenkins │ v1.37.0 │ 17 Dec 25 00:42 UTC │ 17 Dec 25 00:42 UTC │
	│ addons  │ enable metrics-server -p no-preload-864613 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ no-preload-864613            │ jenkins │ v1.37.0 │ 17 Dec 25 00:42 UTC │                     │
	│ stop    │ -p no-preload-864613 --alsologtostderr -v=3                                                                                                                                                                                                          │ no-preload-864613            │ jenkins │ v1.37.0 │ 17 Dec 25 00:42 UTC │ 17 Dec 25 00:42 UTC │
	│ image   │ old-k8s-version-742860 image list --format=json                                                                                                                                                                                                      │ old-k8s-version-742860       │ jenkins │ v1.37.0 │ 17 Dec 25 00:42 UTC │ 17 Dec 25 00:42 UTC │
	│ pause   │ -p old-k8s-version-742860 --alsologtostderr -v=1                                                                                                                                                                                                     │ old-k8s-version-742860       │ jenkins │ v1.37.0 │ 17 Dec 25 00:42 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-864613 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ no-preload-864613            │ jenkins │ v1.37.0 │ 17 Dec 25 00:42 UTC │ 17 Dec 25 00:42 UTC │
	│ start   │ -p no-preload-864613 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-864613            │ jenkins │ v1.37.0 │ 17 Dec 25 00:42 UTC │                     │
	│ delete  │ -p old-k8s-version-742860                                                                                                                                                                                                                            │ old-k8s-version-742860       │ jenkins │ v1.37.0 │ 17 Dec 25 00:42 UTC │ 17 Dec 25 00:42 UTC │
	│ delete  │ -p old-k8s-version-742860                                                                                                                                                                                                                            │ old-k8s-version-742860       │ jenkins │ v1.37.0 │ 17 Dec 25 00:42 UTC │ 17 Dec 25 00:42 UTC │
	│ start   │ -p newest-cni-653717 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-653717            │ jenkins │ v1.37.0 │ 17 Dec 25 00:42 UTC │ 17 Dec 25 00:43 UTC │
	│ addons  │ enable metrics-server -p embed-certs-153232 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ embed-certs-153232           │ jenkins │ v1.37.0 │ 17 Dec 25 00:42 UTC │                     │
	│ stop    │ -p embed-certs-153232 --alsologtostderr -v=3                                                                                                                                                                                                         │ embed-certs-153232           │ jenkins │ v1.37.0 │ 17 Dec 25 00:42 UTC │ 17 Dec 25 00:43 UTC │
	│ addons  │ enable metrics-server -p newest-cni-653717 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ newest-cni-653717            │ jenkins │ v1.37.0 │ 17 Dec 25 00:43 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-414413 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                   │ default-k8s-diff-port-414413 │ jenkins │ v1.37.0 │ 17 Dec 25 00:43 UTC │                     │
	│ stop    │ -p newest-cni-653717 --alsologtostderr -v=3                                                                                                                                                                                                          │ newest-cni-653717            │ jenkins │ v1.37.0 │ 17 Dec 25 00:43 UTC │ 17 Dec 25 00:43 UTC │
	│ stop    │ -p default-k8s-diff-port-414413 --alsologtostderr -v=3                                                                                                                                                                                               │ default-k8s-diff-port-414413 │ jenkins │ v1.37.0 │ 17 Dec 25 00:43 UTC │                     │
	│ addons  │ enable dashboard -p newest-cni-653717 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ newest-cni-653717            │ jenkins │ v1.37.0 │ 17 Dec 25 00:43 UTC │ 17 Dec 25 00:43 UTC │
	│ start   │ -p newest-cni-653717 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-653717            │ jenkins │ v1.37.0 │ 17 Dec 25 00:43 UTC │ 17 Dec 25 00:43 UTC │
	│ addons  │ enable dashboard -p embed-certs-153232 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ embed-certs-153232           │ jenkins │ v1.37.0 │ 17 Dec 25 00:43 UTC │ 17 Dec 25 00:43 UTC │
	│ start   │ -p embed-certs-153232 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-153232           │ jenkins │ v1.37.0 │ 17 Dec 25 00:43 UTC │                     │
	│ image   │ newest-cni-653717 image list --format=json                                                                                                                                                                                                           │ newest-cni-653717            │ jenkins │ v1.37.0 │ 17 Dec 25 00:43 UTC │ 17 Dec 25 00:43 UTC │
	│ pause   │ -p newest-cni-653717 --alsologtostderr -v=1                                                                                                                                                                                                          │ newest-cni-653717            │ jenkins │ v1.37.0 │ 17 Dec 25 00:43 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴───────
──────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 00:43:13
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 00:43:13.020242  301437 out.go:360] Setting OutFile to fd 1 ...
	I1217 00:43:13.020476  301437 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:43:13.020486  301437 out.go:374] Setting ErrFile to fd 2...
	I1217 00:43:13.020490  301437 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:43:13.020753  301437 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-12816/.minikube/bin
	I1217 00:43:13.021247  301437 out.go:368] Setting JSON to false
	I1217 00:43:13.022383  301437 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":5143,"bootTime":1765927050,"procs":303,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 00:43:13.022433  301437 start.go:143] virtualization: kvm guest
	I1217 00:43:13.024226  301437 out.go:179] * [embed-certs-153232] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 00:43:13.025825  301437 notify.go:221] Checking for updates...
	I1217 00:43:13.025832  301437 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 00:43:13.027383  301437 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 00:43:13.028603  301437 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22168-12816/kubeconfig
	I1217 00:43:13.029712  301437 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-12816/.minikube
	I1217 00:43:13.030758  301437 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 00:43:13.031785  301437 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 00:43:08.274769  300419 out.go:252] * Restarting existing docker container for "newest-cni-653717" ...
	I1217 00:43:08.274864  300419 cli_runner.go:164] Run: docker start newest-cni-653717
	I1217 00:43:08.530219  300419 cli_runner.go:164] Run: docker container inspect newest-cni-653717 --format={{.State.Status}}
	I1217 00:43:08.549238  300419 kic.go:430] container "newest-cni-653717" state is running.
	I1217 00:43:08.549711  300419 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-653717
	I1217 00:43:08.567874  300419 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/newest-cni-653717/config.json ...
	I1217 00:43:08.568184  300419 machine.go:94] provisionDockerMachine start ...
	I1217 00:43:08.568267  300419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-653717
	I1217 00:43:08.585795  300419 main.go:143] libmachine: Using SSH client type: native
	I1217 00:43:08.586130  300419 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1217 00:43:08.586157  300419 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 00:43:08.586687  300419 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:34000->127.0.0.1:33093: read: connection reset by peer
	I1217 00:43:11.711445  300419 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-653717
	
	I1217 00:43:11.711472  300419 ubuntu.go:182] provisioning hostname "newest-cni-653717"
	I1217 00:43:11.711530  300419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-653717
	I1217 00:43:11.729003  300419 main.go:143] libmachine: Using SSH client type: native
	I1217 00:43:11.729241  300419 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1217 00:43:11.729259  300419 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-653717 && echo "newest-cni-653717" | sudo tee /etc/hostname
	I1217 00:43:11.862934  300419 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-653717
	
	I1217 00:43:11.863058  300419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-653717
	I1217 00:43:11.880935  300419 main.go:143] libmachine: Using SSH client type: native
	I1217 00:43:11.881192  300419 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1217 00:43:11.881226  300419 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-653717' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-653717/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-653717' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 00:43:12.008758  300419 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 00:43:12.008782  300419 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22168-12816/.minikube CaCertPath:/home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22168-12816/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22168-12816/.minikube}
	I1217 00:43:12.008850  300419 ubuntu.go:190] setting up certificates
	I1217 00:43:12.008862  300419 provision.go:84] configureAuth start
	I1217 00:43:12.008908  300419 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-653717
	I1217 00:43:12.027799  300419 provision.go:143] copyHostCerts
	I1217 00:43:12.027887  300419 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-12816/.minikube/ca.pem, removing ...
	I1217 00:43:12.027913  300419 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-12816/.minikube/ca.pem
	I1217 00:43:12.027987  300419 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22168-12816/.minikube/ca.pem (1078 bytes)
	I1217 00:43:12.028120  300419 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-12816/.minikube/cert.pem, removing ...
	I1217 00:43:12.028131  300419 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-12816/.minikube/cert.pem
	I1217 00:43:12.028186  300419 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22168-12816/.minikube/cert.pem (1123 bytes)
	I1217 00:43:12.028265  300419 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-12816/.minikube/key.pem, removing ...
	I1217 00:43:12.028275  300419 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-12816/.minikube/key.pem
	I1217 00:43:12.028312  300419 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22168-12816/.minikube/key.pem (1679 bytes)
	I1217 00:43:12.028386  300419 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22168-12816/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca-key.pem org=jenkins.newest-cni-653717 san=[127.0.0.1 192.168.94.2 localhost minikube newest-cni-653717]
	I1217 00:43:12.081830  300419 provision.go:177] copyRemoteCerts
	I1217 00:43:12.081888  300419 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 00:43:12.081918  300419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-653717
	I1217 00:43:12.099779  300419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/newest-cni-653717/id_rsa Username:docker}
	I1217 00:43:12.190678  300419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1217 00:43:12.207513  300419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1217 00:43:12.223877  300419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1217 00:43:12.241141  300419 provision.go:87] duration metric: took 232.260945ms to configureAuth
	I1217 00:43:12.241167  300419 ubuntu.go:206] setting minikube options for container-runtime
	I1217 00:43:12.241341  300419 config.go:182] Loaded profile config "newest-cni-653717": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1217 00:43:12.241425  300419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-653717
	I1217 00:43:12.259586  300419 main.go:143] libmachine: Using SSH client type: native
	I1217 00:43:12.259859  300419 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1217 00:43:12.259887  300419 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 00:43:12.536278  300419 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 00:43:12.536308  300419 machine.go:97] duration metric: took 3.968103953s to provisionDockerMachine
	I1217 00:43:12.536323  300419 start.go:293] postStartSetup for "newest-cni-653717" (driver="docker")
	I1217 00:43:12.536340  300419 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 00:43:12.536410  300419 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 00:43:12.536455  300419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-653717
	I1217 00:43:12.555094  300419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/newest-cni-653717/id_rsa Username:docker}
	I1217 00:43:12.647080  300419 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 00:43:12.650723  300419 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 00:43:12.650757  300419 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 00:43:12.650773  300419 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-12816/.minikube/addons for local assets ...
	I1217 00:43:12.650825  300419 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-12816/.minikube/files for local assets ...
	I1217 00:43:12.650946  300419 filesync.go:149] local asset: /home/jenkins/minikube-integration/22168-12816/.minikube/files/etc/ssl/certs/163542.pem -> 163542.pem in /etc/ssl/certs
	I1217 00:43:12.651152  300419 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 00:43:12.675027  300419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/files/etc/ssl/certs/163542.pem --> /etc/ssl/certs/163542.pem (1708 bytes)
	I1217 00:43:12.699530  300419 start.go:296] duration metric: took 163.192934ms for postStartSetup
	I1217 00:43:12.699625  300419 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 00:43:12.699669  300419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-653717
	I1217 00:43:12.718970  300419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/newest-cni-653717/id_rsa Username:docker}
	I1217 00:43:12.813385  300419 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 00:43:12.818537  300419 fix.go:56] duration metric: took 4.562240336s for fixHost
	I1217 00:43:12.818565  300419 start.go:83] releasing machines lock for "newest-cni-653717", held for 4.562291137s
	I1217 00:43:12.818630  300419 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-653717
	I1217 00:43:12.839076  300419 ssh_runner.go:195] Run: cat /version.json
	I1217 00:43:12.839140  300419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-653717
	I1217 00:43:12.839157  300419 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 00:43:12.839236  300419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-653717
	I1217 00:43:12.858954  300419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/newest-cni-653717/id_rsa Username:docker}
	I1217 00:43:12.859795  300419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/newest-cni-653717/id_rsa Username:docker}
	I1217 00:43:13.011144  300419 ssh_runner.go:195] Run: systemctl --version
	I1217 00:43:13.018259  300419 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 00:43:13.054673  300419 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 00:43:13.059674  300419 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 00:43:13.059734  300419 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 00:43:13.067926  300419 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1217 00:43:13.067948  300419 start.go:496] detecting cgroup driver to use...
	I1217 00:43:13.067977  300419 detect.go:190] detected "systemd" cgroup driver on host os
	I1217 00:43:13.068042  300419 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 00:43:13.085483  300419 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 00:43:13.033439  301437 config.go:182] Loaded profile config "embed-certs-153232": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:43:13.034151  301437 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 00:43:13.060158  301437 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1217 00:43:13.060303  301437 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:43:13.119210  301437 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:70 OomKillDisable:false NGoroutines:83 SystemTime:2025-12-17 00:43:13.109693366 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 00:43:13.119314  301437 docker.go:319] overlay module found
	I1217 00:43:13.120948  301437 out.go:179] * Using the docker driver based on existing profile
	I1217 00:43:13.122303  301437 start.go:309] selected driver: docker
	I1217 00:43:13.122316  301437 start.go:927] validating driver "docker" against &{Name:embed-certs-153232 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-153232 Namespace:default APIServerHAVIP: APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:43:13.122392  301437 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 00:43:13.122931  301437 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:43:13.189846  301437 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:70 OomKillDisable:false NGoroutines:83 SystemTime:2025-12-17 00:43:13.179729115 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 00:43:13.190209  301437 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 00:43:13.190243  301437 cni.go:84] Creating CNI manager for ""
	I1217 00:43:13.190312  301437 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 00:43:13.190361  301437 start.go:353] cluster config:
	{Name:embed-certs-153232 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-153232 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:43:13.192222  301437 out.go:179] * Starting "embed-certs-153232" primary control-plane node in "embed-certs-153232" cluster
	I1217 00:43:13.193581  301437 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 00:43:13.194844  301437 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 00:43:13.196084  301437 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1217 00:43:13.196117  301437 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22168-12816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1217 00:43:13.196147  301437 cache.go:65] Caching tarball of preloaded images
	I1217 00:43:13.196178  301437 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 00:43:13.196220  301437 preload.go:238] Found /home/jenkins/minikube-integration/22168-12816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1217 00:43:13.196227  301437 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1217 00:43:13.196313  301437 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/embed-certs-153232/config.json ...
	I1217 00:43:13.216971  301437 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 00:43:13.217009  301437 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 00:43:13.217030  301437 cache.go:243] Successfully downloaded all kic artifacts
	I1217 00:43:13.217063  301437 start.go:360] acquireMachinesLock for embed-certs-153232: {Name:mkd806ec7efded4ac7bfe60ed725b3bbcfe0e575 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 00:43:13.217138  301437 start.go:364] duration metric: took 38.193µs to acquireMachinesLock for "embed-certs-153232"
	I1217 00:43:13.217158  301437 start.go:96] Skipping create...Using existing machine configuration
	I1217 00:43:13.217166  301437 fix.go:54] fixHost starting: 
	I1217 00:43:13.217354  301437 cli_runner.go:164] Run: docker container inspect embed-certs-153232 --format={{.State.Status}}
	I1217 00:43:13.237231  301437 fix.go:112] recreateIfNeeded on embed-certs-153232: state=Stopped err=<nil>
	W1217 00:43:13.237264  301437 fix.go:138] unexpected machine state, will restart: <nil>
	I1217 00:43:13.101382  300419 docker.go:218] disabling cri-docker service (if available) ...
	I1217 00:43:13.101449  300419 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 00:43:13.116971  300419 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 00:43:13.130586  300419 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 00:43:13.225729  300419 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 00:43:13.318717  300419 docker.go:234] disabling docker service ...
	I1217 00:43:13.318780  300419 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 00:43:13.334266  300419 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 00:43:13.352470  300419 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 00:43:13.437263  300419 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 00:43:13.525276  300419 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 00:43:13.539199  300419 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 00:43:13.556581  300419 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 00:43:13.556662  300419 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:43:13.566443  300419 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1217 00:43:13.566498  300419 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:43:13.576145  300419 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:43:13.586273  300419 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:43:13.596229  300419 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 00:43:13.605168  300419 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:43:13.614524  300419 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:43:13.622713  300419 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:43:13.631257  300419 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 00:43:13.638535  300419 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 00:43:13.647048  300419 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:43:13.748333  300419 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1217 00:43:13.908411  300419 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 00:43:13.908477  300419 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 00:43:13.912591  300419 start.go:564] Will wait 60s for crictl version
	I1217 00:43:13.912642  300419 ssh_runner.go:195] Run: which crictl
	I1217 00:43:13.916604  300419 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 00:43:13.941483  300419 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1217 00:43:13.941540  300419 ssh_runner.go:195] Run: crio --version
	I1217 00:43:13.968259  300419 ssh_runner.go:195] Run: crio --version
	I1217 00:43:13.997467  300419 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1217 00:43:13.998577  300419 cli_runner.go:164] Run: docker network inspect newest-cni-653717 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 00:43:14.015370  300419 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1217 00:43:14.019356  300419 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 00:43:14.030806  300419 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	W1217 00:43:10.801972  290128 pod_ready.go:104] pod "coredns-7d764666f9-6ql6r" is not "Ready", error: <nil>
	W1217 00:43:12.803027  290128 pod_ready.go:104] pod "coredns-7d764666f9-6ql6r" is not "Ready", error: <nil>
	I1217 00:43:14.031936  300419 kubeadm.go:884] updating cluster {Name:newest-cni-653717 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-653717 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountI
P: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 00:43:14.032097  300419 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1217 00:43:14.032141  300419 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 00:43:14.063936  300419 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 00:43:14.063958  300419 crio.go:433] Images already preloaded, skipping extraction
	I1217 00:43:14.064029  300419 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 00:43:14.087668  300419 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 00:43:14.087689  300419 cache_images.go:86] Images are preloaded, skipping loading
	I1217 00:43:14.087697  300419 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.35.0-beta.0 crio true true} ...
	I1217 00:43:14.087798  300419 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-653717 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-653717 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 00:43:14.087898  300419 ssh_runner.go:195] Run: crio config
	I1217 00:43:14.133154  300419 cni.go:84] Creating CNI manager for ""
	I1217 00:43:14.133186  300419 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 00:43:14.133212  300419 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1217 00:43:14.133243  300419 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-653717 NodeName:newest-cni-653717 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 00:43:14.133845  300419 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-653717"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 00:43:14.133919  300419 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1217 00:43:14.141716  300419 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 00:43:14.141762  300419 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 00:43:14.149070  300419 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1217 00:43:14.161125  300419 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1217 00:43:14.173036  300419 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2218 bytes)
	I1217 00:43:14.184726  300419 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1217 00:43:14.188144  300419 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 00:43:14.197550  300419 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:43:14.275214  300419 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 00:43:14.299766  300419 certs.go:69] Setting up /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/newest-cni-653717 for IP: 192.168.94.2
	I1217 00:43:14.299790  300419 certs.go:195] generating shared ca certs ...
	I1217 00:43:14.299808  300419 certs.go:227] acquiring lock for ca certs: {Name:mk3fafd0dd66863a6056cb02497503a5e6afecf4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:43:14.300005  300419 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22168-12816/.minikube/ca.key
	I1217 00:43:14.300070  300419 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22168-12816/.minikube/proxy-client-ca.key
	I1217 00:43:14.300093  300419 certs.go:257] generating profile certs ...
	I1217 00:43:14.300204  300419 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/newest-cni-653717/client.key
	I1217 00:43:14.300278  300419 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/newest-cni-653717/apiserver.key.17c07d81
	I1217 00:43:14.300344  300419 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/newest-cni-653717/proxy-client.key
	I1217 00:43:14.300489  300419 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/16354.pem (1338 bytes)
	W1217 00:43:14.300535  300419 certs.go:480] ignoring /home/jenkins/minikube-integration/22168-12816/.minikube/certs/16354_empty.pem, impossibly tiny 0 bytes
	I1217 00:43:14.300550  300419 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca-key.pem (1679 bytes)
	I1217 00:43:14.300597  300419 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem (1078 bytes)
	I1217 00:43:14.300636  300419 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/cert.pem (1123 bytes)
	I1217 00:43:14.300673  300419 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/key.pem (1679 bytes)
	I1217 00:43:14.300732  300419 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/files/etc/ssl/certs/163542.pem (1708 bytes)
	I1217 00:43:14.301517  300419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 00:43:14.321493  300419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1217 00:43:14.339778  300419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 00:43:14.358392  300419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1671 bytes)
	I1217 00:43:14.381070  300419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/newest-cni-653717/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1217 00:43:14.399416  300419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/newest-cni-653717/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1217 00:43:14.416061  300419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/newest-cni-653717/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 00:43:14.432412  300419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/newest-cni-653717/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1217 00:43:14.448688  300419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/files/etc/ssl/certs/163542.pem --> /usr/share/ca-certificates/163542.pem (1708 bytes)
	I1217 00:43:14.465560  300419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 00:43:14.482118  300419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/certs/16354.pem --> /usr/share/ca-certificates/16354.pem (1338 bytes)
	I1217 00:43:14.500928  300419 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 00:43:14.512899  300419 ssh_runner.go:195] Run: openssl version
	I1217 00:43:14.518834  300419 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/163542.pem
	I1217 00:43:14.525797  300419 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/163542.pem /etc/ssl/certs/163542.pem
	I1217 00:43:14.532879  300419 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/163542.pem
	I1217 00:43:14.536295  300419 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 00:13 /usr/share/ca-certificates/163542.pem
	I1217 00:43:14.536344  300419 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/163542.pem
	I1217 00:43:14.570454  300419 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 00:43:14.578196  300419 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:43:14.585485  300419 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 00:43:14.592558  300419 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:43:14.595934  300419 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 00:05 /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:43:14.595973  300419 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:43:14.630063  300419 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 00:43:14.638031  300419 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/16354.pem
	I1217 00:43:14.645344  300419 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/16354.pem /etc/ssl/certs/16354.pem
	I1217 00:43:14.652719  300419 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16354.pem
	I1217 00:43:14.656452  300419 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 00:13 /usr/share/ca-certificates/16354.pem
	I1217 00:43:14.656491  300419 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16354.pem
	I1217 00:43:14.691440  300419 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 00:43:14.698802  300419 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 00:43:14.702646  300419 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 00:43:14.735879  300419 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 00:43:14.770301  300419 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 00:43:14.809543  300419 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 00:43:14.854438  300419 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 00:43:14.901721  300419 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
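	Each "openssl x509 -noout -in <cert> -checkend 86400" run above exits successfully only if the certificate is still valid 86400 seconds (24 hours) from now. A minimal Go sketch of an equivalent check with crypto/x509 (illustrative only, not minikube's code; the certificate path is one of those listed in the log):

// certcheck.go - illustrative sketch, not minikube's implementation.
// Equivalent of: openssl x509 -noout -in <cert> -checkend 86400
// i.e. report whether the certificate expires within the next 24 hours.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block found in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	// Path taken from the log above; requires read access to the certificate.
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	if soon {
		fmt.Println("certificate expires within 24h")
	} else {
		fmt.Println("certificate is valid for at least another 24h")
	}
}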
	I1217 00:43:14.954325  300419 kubeadm.go:401] StartCluster: {Name:newest-cni-653717 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-653717 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP:
MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:43:14.954434  300419 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 00:43:14.954495  300419 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 00:43:14.989680  300419 cri.go:89] found id: "de646b7d108062a9c689d337e628f48d29162ce37e015f70f4dfde0b63fd7fe1"
	I1217 00:43:14.989705  300419 cri.go:89] found id: "f155d5d25fa50fc257ecd4b7da29e9d818c2cfa5f80f0f2c2dfa23e5b3025e69"
	I1217 00:43:14.989713  300419 cri.go:89] found id: "097160c44a70ef0edf501027a68baa41f06ea618f8b735835c64eb3b3c78f426"
	I1217 00:43:14.989719  300419 cri.go:89] found id: "608d066efbe101d85b2f7a5a7b16d1ad974b66b117a0796b4196b6e3e5f4c30a"
	I1217 00:43:14.989725  300419 cri.go:89] found id: ""
	I1217 00:43:14.989778  300419 ssh_runner.go:195] Run: sudo runc list -f json
	W1217 00:43:15.002543  300419 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T00:43:15Z" level=error msg="open /run/runc: no such file or directory"
	I1217 00:43:15.002606  300419 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 00:43:15.010245  300419 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1217 00:43:15.010268  300419 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1217 00:43:15.010304  300419 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1217 00:43:15.017281  300419 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1217 00:43:15.018086  300419 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-653717" does not appear in /home/jenkins/minikube-integration/22168-12816/kubeconfig
	I1217 00:43:15.018523  300419 kubeconfig.go:62] /home/jenkins/minikube-integration/22168-12816/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-653717" cluster setting kubeconfig missing "newest-cni-653717" context setting]
	I1217 00:43:15.019385  300419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/kubeconfig: {Name:mkd977475b39383babd1c1bcd25b2b3c1ea2d727 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:43:15.021191  300419 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1217 00:43:15.028401  300419 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I1217 00:43:15.028428  300419 kubeadm.go:602] duration metric: took 18.153636ms to restartPrimaryControlPlane
	I1217 00:43:15.028437  300419 kubeadm.go:403] duration metric: took 74.123324ms to StartCluster
	I1217 00:43:15.028450  300419 settings.go:142] acquiring lock: {Name:mk7d7632cd00ceda791845d793d841181ea8188a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:43:15.028515  300419 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22168-12816/kubeconfig
	I1217 00:43:15.029409  300419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/kubeconfig: {Name:mkd977475b39383babd1c1bcd25b2b3c1ea2d727 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:43:15.029593  300419 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 00:43:15.029661  300419 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 00:43:15.029760  300419 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-653717"
	I1217 00:43:15.029776  300419 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-653717"
	W1217 00:43:15.029784  300419 addons.go:248] addon storage-provisioner should already be in state true
	I1217 00:43:15.029777  300419 addons.go:70] Setting dashboard=true in profile "newest-cni-653717"
	I1217 00:43:15.029804  300419 addons.go:239] Setting addon dashboard=true in "newest-cni-653717"
	I1217 00:43:15.029810  300419 host.go:66] Checking if "newest-cni-653717" exists ...
	I1217 00:43:15.029803  300419 addons.go:70] Setting default-storageclass=true in profile "newest-cni-653717"
	I1217 00:43:15.029832  300419 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-653717"
	W1217 00:43:15.029814  300419 addons.go:248] addon dashboard should already be in state true
	I1217 00:43:15.029895  300419 host.go:66] Checking if "newest-cni-653717" exists ...
	I1217 00:43:15.030178  300419 cli_runner.go:164] Run: docker container inspect newest-cni-653717 --format={{.State.Status}}
	I1217 00:43:15.029785  300419 config.go:182] Loaded profile config "newest-cni-653717": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1217 00:43:15.030316  300419 cli_runner.go:164] Run: docker container inspect newest-cni-653717 --format={{.State.Status}}
	I1217 00:43:15.030406  300419 cli_runner.go:164] Run: docker container inspect newest-cni-653717 --format={{.State.Status}}
	I1217 00:43:15.033423  300419 out.go:179] * Verifying Kubernetes components...
	I1217 00:43:15.034444  300419 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:43:15.055791  300419 addons.go:239] Setting addon default-storageclass=true in "newest-cni-653717"
	W1217 00:43:15.056419  300419 addons.go:248] addon default-storageclass should already be in state true
	I1217 00:43:15.056521  300419 host.go:66] Checking if "newest-cni-653717" exists ...
	I1217 00:43:15.057770  300419 cli_runner.go:164] Run: docker container inspect newest-cni-653717 --format={{.State.Status}}
	I1217 00:43:15.058417  300419 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1217 00:43:15.058417  300419 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 00:43:15.059574  300419 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:43:15.059606  300419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 00:43:15.059653  300419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-653717
	I1217 00:43:15.059575  300419 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1217 00:43:15.060919  300419 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1217 00:43:15.060937  300419 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1217 00:43:15.061206  300419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-653717
	I1217 00:43:15.094486  300419 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 00:43:15.094507  300419 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 00:43:15.094581  300419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-653717
	I1217 00:43:15.096483  300419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/newest-cni-653717/id_rsa Username:docker}
	I1217 00:43:15.101728  300419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/newest-cni-653717/id_rsa Username:docker}
	I1217 00:43:15.120293  300419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/newest-cni-653717/id_rsa Username:docker}
	I1217 00:43:15.176431  300419 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 00:43:15.188738  300419 api_server.go:52] waiting for apiserver process to appear ...
	I1217 00:43:15.188802  300419 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:43:15.202898  300419 api_server.go:72] duration metric: took 173.270552ms to wait for apiserver process to appear ...
	I1217 00:43:15.202924  300419 api_server.go:88] waiting for apiserver healthz status ...
	I1217 00:43:15.202948  300419 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1217 00:43:15.205761  300419 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1217 00:43:15.205784  300419 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1217 00:43:15.211796  300419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:43:15.220428  300419 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1217 00:43:15.220451  300419 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1217 00:43:15.225893  300419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:43:15.235805  300419 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1217 00:43:15.235823  300419 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1217 00:43:15.248794  300419 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1217 00:43:15.248822  300419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1217 00:43:15.261412  300419 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1217 00:43:15.261432  300419 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1217 00:43:15.274541  300419 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1217 00:43:15.274566  300419 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1217 00:43:15.287628  300419 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1217 00:43:15.287647  300419 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1217 00:43:15.302277  300419 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1217 00:43:15.302302  300419 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1217 00:43:15.314969  300419 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1217 00:43:15.315021  300419 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1217 00:43:15.327529  300419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1217 00:43:16.506889  300419 api_server.go:279] https://192.168.94.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1217 00:43:16.506915  300419 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1217 00:43:16.506930  300419 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1217 00:43:16.512154  300419 api_server.go:279] https://192.168.94.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1217 00:43:16.512199  300419 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1217 00:43:16.704136  300419 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1217 00:43:16.708886  300419 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1217 00:43:16.708913  300419 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1217 00:43:17.039543  300419 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.827715953s)
	I1217 00:43:17.039628  300419 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.813707626s)
	I1217 00:43:17.039738  300419 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.712174331s)
	I1217 00:43:17.041405  300419 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-653717 addons enable metrics-server
	
	I1217 00:43:17.050070  300419 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1217 00:43:17.051118  300419 addons.go:530] duration metric: took 2.02146311s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1217 00:43:17.203089  300419 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1217 00:43:17.207726  300419 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1217 00:43:17.207752  300419 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
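	The 403 responses above come from anonymous requests to /healthz being rejected, and the 500 responses show the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks have not finished yet; the wait loop simply retries until the endpoint returns 200, as it does a few lines further down. A minimal Go sketch of such a poll (illustrative only, not minikube's implementation; the URL, timeouts, and retry interval are assumptions):

// healthzpoll.go - illustrative sketch; not part of minikube or this test harness.
// Polls the apiserver /healthz endpoint until it returns 200 or the deadline passes.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver presents a self-signed certificate for 192.168.94.2,
		// so verification is skipped in this sketch.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			// 403 (anonymous access denied) and 500 (post-start hooks still
			// failing) are treated as "not ready yet", as in the log above.
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.94.2:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}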
	I1217 00:43:17.703086  300419 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1217 00:43:17.708106  300419 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1217 00:43:17.709199  300419 api_server.go:141] control plane version: v1.35.0-beta.0
	I1217 00:43:17.709228  300419 api_server.go:131] duration metric: took 2.506295852s to wait for apiserver health ...
	I1217 00:43:17.709241  300419 system_pods.go:43] waiting for kube-system pods to appear ...
	I1217 00:43:17.712970  300419 system_pods.go:59] 8 kube-system pods found
	I1217 00:43:17.713053  300419 system_pods.go:61] "coredns-7d764666f9-djwjl" [741342b4-626d-4282-ba19-0e8b37eb2556] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1217 00:43:17.713080  300419 system_pods.go:61] "etcd-newest-cni-653717" [8210d4d5-f66f-43fe-b160-e85265f0dcd0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1217 00:43:17.713097  300419 system_pods.go:61] "kindnet-xmw8c" [7688d3d1-e8d9-4b27-bd63-412f8972c114] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1217 00:43:17.713109  300419 system_pods.go:61] "kube-apiserver-newest-cni-653717" [2a8f1a0d-5c29-49c7-b857-e82bc22e048f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1217 00:43:17.713130  300419 system_pods.go:61] "kube-controller-manager-newest-cni-653717" [d368a2d6-d0bf-4119-982a-d08d313d1433] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1217 00:43:17.713142  300419 system_pods.go:61] "kube-proxy-9jd8t" [e7d2bcca-b703-4fd2-9af0-c08825a47e85] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1217 00:43:17.713152  300419 system_pods.go:61] "kube-scheduler-newest-cni-653717" [f17c94c7-8363-4f0d-a31c-6db9a2b0f14c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1217 00:43:17.713163  300419 system_pods.go:61] "storage-provisioner" [e5c636ed-8536-4f92-8033-757cda2e5a8e] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1217 00:43:17.713175  300419 system_pods.go:74] duration metric: took 3.926365ms to wait for pod list to return data ...
	I1217 00:43:17.713187  300419 default_sa.go:34] waiting for default service account to be created ...
	I1217 00:43:17.715539  300419 default_sa.go:45] found service account: "default"
	I1217 00:43:17.715559  300419 default_sa.go:55] duration metric: took 2.36225ms for default service account to be created ...
	I1217 00:43:17.715572  300419 kubeadm.go:587] duration metric: took 2.68594841s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1217 00:43:17.715594  300419 node_conditions.go:102] verifying NodePressure condition ...
	I1217 00:43:17.717862  300419 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1217 00:43:17.717894  300419 node_conditions.go:123] node cpu capacity is 8
	I1217 00:43:17.717911  300419 node_conditions.go:105] duration metric: took 2.311759ms to run NodePressure ...
	I1217 00:43:17.717927  300419 start.go:242] waiting for startup goroutines ...
	I1217 00:43:17.717941  300419 start.go:247] waiting for cluster config update ...
	I1217 00:43:17.717955  300419 start.go:256] writing updated cluster config ...
	I1217 00:43:17.718209  300419 ssh_runner.go:195] Run: rm -f paused
	I1217 00:43:17.778087  300419 start.go:625] kubectl: 1.34.3, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1217 00:43:17.780314  300419 out.go:179] * Done! kubectl is now configured to use "newest-cni-653717" cluster and "default" namespace by default
	I1217 00:43:13.239674  301437 out.go:252] * Restarting existing docker container for "embed-certs-153232" ...
	I1217 00:43:13.239739  301437 cli_runner.go:164] Run: docker start embed-certs-153232
	I1217 00:43:13.518313  301437 cli_runner.go:164] Run: docker container inspect embed-certs-153232 --format={{.State.Status}}
	I1217 00:43:13.538713  301437 kic.go:430] container "embed-certs-153232" state is running.
	I1217 00:43:13.539243  301437 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-153232
	I1217 00:43:13.561482  301437 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/embed-certs-153232/config.json ...
	I1217 00:43:13.561678  301437 machine.go:94] provisionDockerMachine start ...
	I1217 00:43:13.561743  301437 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-153232
	I1217 00:43:13.581191  301437 main.go:143] libmachine: Using SSH client type: native
	I1217 00:43:13.581560  301437 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1217 00:43:13.581584  301437 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 00:43:13.582216  301437 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45578->127.0.0.1:33098: read: connection reset by peer
	I1217 00:43:16.730553  301437 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-153232
	
	I1217 00:43:16.730580  301437 ubuntu.go:182] provisioning hostname "embed-certs-153232"
	I1217 00:43:16.730642  301437 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-153232
	I1217 00:43:16.752281  301437 main.go:143] libmachine: Using SSH client type: native
	I1217 00:43:16.752586  301437 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1217 00:43:16.752607  301437 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-153232 && echo "embed-certs-153232" | sudo tee /etc/hostname
	I1217 00:43:16.896361  301437 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-153232
	
	I1217 00:43:16.896439  301437 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-153232
	I1217 00:43:16.920667  301437 main.go:143] libmachine: Using SSH client type: native
	I1217 00:43:16.920984  301437 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1217 00:43:16.921041  301437 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-153232' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-153232/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-153232' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 00:43:17.053232  301437 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 00:43:17.053276  301437 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22168-12816/.minikube CaCertPath:/home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22168-12816/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22168-12816/.minikube}
	I1217 00:43:17.053302  301437 ubuntu.go:190] setting up certificates
	I1217 00:43:17.053314  301437 provision.go:84] configureAuth start
	I1217 00:43:17.053376  301437 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-153232
	I1217 00:43:17.072569  301437 provision.go:143] copyHostCerts
	I1217 00:43:17.072650  301437 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-12816/.minikube/cert.pem, removing ...
	I1217 00:43:17.072666  301437 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-12816/.minikube/cert.pem
	I1217 00:43:17.072747  301437 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22168-12816/.minikube/cert.pem (1123 bytes)
	I1217 00:43:17.072921  301437 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-12816/.minikube/key.pem, removing ...
	I1217 00:43:17.072932  301437 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-12816/.minikube/key.pem
	I1217 00:43:17.072976  301437 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22168-12816/.minikube/key.pem (1679 bytes)
	I1217 00:43:17.073078  301437 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-12816/.minikube/ca.pem, removing ...
	I1217 00:43:17.073090  301437 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-12816/.minikube/ca.pem
	I1217 00:43:17.073128  301437 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22168-12816/.minikube/ca.pem (1078 bytes)
	I1217 00:43:17.073199  301437 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22168-12816/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca-key.pem org=jenkins.embed-certs-153232 san=[127.0.0.1 192.168.85.2 embed-certs-153232 localhost minikube]
	I1217 00:43:17.178876  301437 provision.go:177] copyRemoteCerts
	I1217 00:43:17.178931  301437 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 00:43:17.178961  301437 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-153232
	I1217 00:43:17.196660  301437 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/embed-certs-153232/id_rsa Username:docker}
	I1217 00:43:17.289862  301437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1217 00:43:17.307299  301437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1217 00:43:17.324420  301437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1217 00:43:17.341134  301437 provision.go:87] duration metric: took 287.803695ms to configureAuth
	I1217 00:43:17.341160  301437 ubuntu.go:206] setting minikube options for container-runtime
	I1217 00:43:17.341354  301437 config.go:182] Loaded profile config "embed-certs-153232": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:43:17.341461  301437 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-153232
	I1217 00:43:17.358979  301437 main.go:143] libmachine: Using SSH client type: native
	I1217 00:43:17.359220  301437 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1217 00:43:17.359239  301437 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 00:43:17.727983  301437 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 00:43:17.728020  301437 machine.go:97] duration metric: took 4.166327334s to provisionDockerMachine
	I1217 00:43:17.728033  301437 start.go:293] postStartSetup for "embed-certs-153232" (driver="docker")
	I1217 00:43:17.728045  301437 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 00:43:17.728101  301437 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 00:43:17.728142  301437 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-153232
	I1217 00:43:17.748654  301437 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/embed-certs-153232/id_rsa Username:docker}
	I1217 00:43:17.846509  301437 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 00:43:17.850221  301437 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 00:43:17.850243  301437 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 00:43:17.850252  301437 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-12816/.minikube/addons for local assets ...
	I1217 00:43:17.850296  301437 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-12816/.minikube/files for local assets ...
	I1217 00:43:17.850370  301437 filesync.go:149] local asset: /home/jenkins/minikube-integration/22168-12816/.minikube/files/etc/ssl/certs/163542.pem -> 163542.pem in /etc/ssl/certs
	I1217 00:43:17.850481  301437 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 00:43:17.857806  301437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/files/etc/ssl/certs/163542.pem --> /etc/ssl/certs/163542.pem (1708 bytes)
	I1217 00:43:17.875672  301437 start.go:296] duration metric: took 147.626477ms for postStartSetup
	I1217 00:43:17.875743  301437 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 00:43:17.875789  301437 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-153232
	I1217 00:43:17.896453  301437 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/embed-certs-153232/id_rsa Username:docker}
	I1217 00:43:17.988329  301437 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 00:43:17.992617  301437 fix.go:56] duration metric: took 4.775446798s for fixHost
	I1217 00:43:17.992636  301437 start.go:83] releasing machines lock for "embed-certs-153232", held for 4.77548866s
	I1217 00:43:17.992701  301437 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-153232
	I1217 00:43:18.011378  301437 ssh_runner.go:195] Run: cat /version.json
	I1217 00:43:18.011453  301437 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-153232
	I1217 00:43:18.011508  301437 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 00:43:18.011572  301437 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-153232
	I1217 00:43:18.031047  301437 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/embed-certs-153232/id_rsa Username:docker}
	I1217 00:43:18.031364  301437 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/embed-certs-153232/id_rsa Username:docker}
	I1217 00:43:18.124007  301437 ssh_runner.go:195] Run: systemctl --version
	I1217 00:43:18.191518  301437 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 00:43:18.231195  301437 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 00:43:18.236329  301437 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 00:43:18.236388  301437 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 00:43:18.244880  301437 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
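The find invocation above is logged with its shell quoting stripped; run interactively, the parentheses and the trailing ';' need escaping. A sketch of the same rename-to-disable pass over /etc/cni/net.d (not the exact command minikube runs, just the pattern):

    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -o -name '*podman*' \) -a -not -name '*.mk_disabled' \) \
      -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;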
	I1217 00:43:18.244899  301437 start.go:496] detecting cgroup driver to use...
	I1217 00:43:18.244930  301437 detect.go:190] detected "systemd" cgroup driver on host os
	I1217 00:43:18.244971  301437 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 00:43:18.259621  301437 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 00:43:18.272633  301437 docker.go:218] disabling cri-docker service (if available) ...
	I1217 00:43:18.272687  301437 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 00:43:18.286128  301437 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 00:43:18.298271  301437 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 00:43:18.395785  301437 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 00:43:18.493494  301437 docker.go:234] disabling docker service ...
	I1217 00:43:18.493559  301437 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 00:43:18.509760  301437 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 00:43:18.524655  301437 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 00:43:18.623023  301437 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 00:43:18.710293  301437 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 00:43:18.723406  301437 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 00:43:18.736848  301437 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 00:43:18.736910  301437 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:43:18.745575  301437 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1217 00:43:18.745628  301437 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:43:18.754055  301437 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:43:18.762572  301437 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:43:18.771966  301437 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 00:43:18.779835  301437 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:43:18.788694  301437 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:43:18.797626  301437 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:43:18.807631  301437 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 00:43:18.815247  301437 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 00:43:18.823739  301437 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:43:18.904870  301437 ssh_runner.go:195] Run: sudo systemctl restart crio
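Assuming each sed above finds its target line, the relevant keys in /etc/crio/crio.conf.d/02-crio.conf end up reading roughly as follows before the daemon-reload and crio restart pick them up (a sketch only; other settings in the drop-in are left untouched):

    pause_image = "registry.k8s.io/pause:3.10.1"
    cgroup_manager = "systemd"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]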
	I1217 00:43:19.049353  301437 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 00:43:19.049410  301437 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 00:43:19.053190  301437 start.go:564] Will wait 60s for crictl version
	I1217 00:43:19.053239  301437 ssh_runner.go:195] Run: which crictl
	I1217 00:43:19.056644  301437 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 00:43:19.081878  301437 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1217 00:43:19.081954  301437 ssh_runner.go:195] Run: crio --version
	I1217 00:43:19.108104  301437 ssh_runner.go:195] Run: crio --version
	I1217 00:43:19.137756  301437 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	W1217 00:43:14.803418  290128 pod_ready.go:104] pod "coredns-7d764666f9-6ql6r" is not "Ready", error: <nil>
	W1217 00:43:16.804199  290128 pod_ready.go:104] pod "coredns-7d764666f9-6ql6r" is not "Ready", error: <nil>
	W1217 00:43:19.303496  290128 pod_ready.go:104] pod "coredns-7d764666f9-6ql6r" is not "Ready", error: <nil>
	I1217 00:43:19.138849  301437 cli_runner.go:164] Run: docker network inspect embed-certs-153232 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 00:43:19.156769  301437 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1217 00:43:19.160695  301437 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 00:43:19.171432  301437 kubeadm.go:884] updating cluster {Name:embed-certs-153232 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-153232 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docke
r BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 00:43:19.171600  301437 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1217 00:43:19.171663  301437 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 00:43:19.201717  301437 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 00:43:19.201761  301437 crio.go:433] Images already preloaded, skipping extraction
	I1217 00:43:19.201804  301437 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 00:43:19.230225  301437 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 00:43:19.230250  301437 cache_images.go:86] Images are preloaded, skipping loading
	I1217 00:43:19.230261  301437 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.2 crio true true} ...
	I1217 00:43:19.230403  301437 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-153232 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:embed-certs-153232 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
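Note the doubled ExecStart in the kubelet drop-in above: in a systemd override, a bare "ExecStart=" first clears whatever the base kubelet.service defines, and the second line then supplies the full command, so the drop-in replaces the command line rather than appending to it. The generic pattern (hypothetical unit, for illustration only):

    [Service]
    ExecStart=
    ExecStart=/usr/local/bin/some-daemon --flag=value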
	I1217 00:43:19.230488  301437 ssh_runner.go:195] Run: crio config
	I1217 00:43:19.285651  301437 cni.go:84] Creating CNI manager for ""
	I1217 00:43:19.285678  301437 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 00:43:19.285695  301437 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 00:43:19.285721  301437 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-153232 NodeName:embed-certs-153232 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/et
c/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 00:43:19.285887  301437 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-153232"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 00:43:19.285957  301437 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1217 00:43:19.293949  301437 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 00:43:19.294013  301437 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 00:43:19.301937  301437 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1217 00:43:19.314092  301437 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1217 00:43:19.326389  301437 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
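The file just written, /var/tmp/minikube/kubeadm.yaml.new, bundles the four documents shown above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by "---". On a fresh bootstrap such a bundle would be handed to kubeadm in one shot, roughly as below; this particular run instead takes the "cluster restart" path further down, so no init is performed here:

    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new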
	I1217 00:43:19.338404  301437 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1217 00:43:19.342368  301437 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 00:43:19.352249  301437 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:43:19.450887  301437 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 00:43:19.476061  301437 certs.go:69] Setting up /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/embed-certs-153232 for IP: 192.168.85.2
	I1217 00:43:19.476081  301437 certs.go:195] generating shared ca certs ...
	I1217 00:43:19.476102  301437 certs.go:227] acquiring lock for ca certs: {Name:mk3fafd0dd66863a6056cb02497503a5e6afecf4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:43:19.476257  301437 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22168-12816/.minikube/ca.key
	I1217 00:43:19.476315  301437 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22168-12816/.minikube/proxy-client-ca.key
	I1217 00:43:19.476328  301437 certs.go:257] generating profile certs ...
	I1217 00:43:19.476450  301437 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/embed-certs-153232/client.key
	I1217 00:43:19.476538  301437 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/embed-certs-153232/apiserver.key.9c5b6ce4
	I1217 00:43:19.476607  301437 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/embed-certs-153232/proxy-client.key
	I1217 00:43:19.476783  301437 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/16354.pem (1338 bytes)
	W1217 00:43:19.476845  301437 certs.go:480] ignoring /home/jenkins/minikube-integration/22168-12816/.minikube/certs/16354_empty.pem, impossibly tiny 0 bytes
	I1217 00:43:19.476859  301437 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca-key.pem (1679 bytes)
	I1217 00:43:19.476896  301437 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem (1078 bytes)
	I1217 00:43:19.476938  301437 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/cert.pem (1123 bytes)
	I1217 00:43:19.476980  301437 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/key.pem (1679 bytes)
	I1217 00:43:19.477065  301437 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/files/etc/ssl/certs/163542.pem (1708 bytes)
	I1217 00:43:19.477864  301437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 00:43:19.496400  301437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1217 00:43:19.514827  301437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 00:43:19.534257  301437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1671 bytes)
	I1217 00:43:19.558896  301437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/embed-certs-153232/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1217 00:43:19.577374  301437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/embed-certs-153232/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 00:43:19.593935  301437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/embed-certs-153232/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 00:43:19.610639  301437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/embed-certs-153232/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1217 00:43:19.629006  301437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 00:43:19.656848  301437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/certs/16354.pem --> /usr/share/ca-certificates/16354.pem (1338 bytes)
	I1217 00:43:19.675161  301437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/files/etc/ssl/certs/163542.pem --> /usr/share/ca-certificates/163542.pem (1708 bytes)
	I1217 00:43:19.691709  301437 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 00:43:19.708839  301437 ssh_runner.go:195] Run: openssl version
	I1217 00:43:19.714839  301437 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:43:19.723416  301437 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 00:43:19.733187  301437 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:43:19.738445  301437 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 00:05 /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:43:19.738503  301437 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:43:19.791538  301437 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 00:43:19.799763  301437 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/16354.pem
	I1217 00:43:19.808195  301437 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/16354.pem /etc/ssl/certs/16354.pem
	I1217 00:43:19.815917  301437 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16354.pem
	I1217 00:43:19.820648  301437 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 00:13 /usr/share/ca-certificates/16354.pem
	I1217 00:43:19.820700  301437 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16354.pem
	I1217 00:43:19.862242  301437 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 00:43:19.871343  301437 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/163542.pem
	I1217 00:43:19.879739  301437 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/163542.pem /etc/ssl/certs/163542.pem
	I1217 00:43:19.887242  301437 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/163542.pem
	I1217 00:43:19.890787  301437 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 00:13 /usr/share/ca-certificates/163542.pem
	I1217 00:43:19.890832  301437 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/163542.pem
	I1217 00:43:19.926488  301437 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
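The same three-step dance repeats for each CA above: the PEM is copied into /usr/share/ca-certificates, force-symlinked into /etc/ssl/certs, and the OpenSSL subject-hash link (b5213941.0, 51391683.0, 3ec20f2e.0) is verified. The hash in the link name comes from openssl itself; a sketch with an illustrative file name, not the exact commands minikube issues:

    sudo ln -fs /usr/share/ca-certificates/myCA.pem /etc/ssl/certs/myCA.pem
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/myCA.pem)
    sudo test -L "/etc/ssl/certs/${hash}.0" || sudo ln -fs myCA.pem "/etc/ssl/certs/${hash}.0"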
	I1217 00:43:19.934257  301437 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 00:43:19.937849  301437 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 00:43:19.974804  301437 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 00:43:20.020581  301437 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 00:43:20.062588  301437 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 00:43:20.122228  301437 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 00:43:20.176358  301437 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
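Each openssl run above exits 0 only if the certificate is still valid 86400 seconds (24 hours) from now, i.e. a quick way to verify that none of the control-plane certs is about to expire. Checking one by hand:

    openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver.crt \
      && echo "valid for at least another day" \
      || echo "expires within 24h (or could not be read)"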
	I1217 00:43:20.211282  301437 kubeadm.go:401] StartCluster: {Name:embed-certs-153232 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-153232 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker B
inaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:43:20.211381  301437 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 00:43:20.211436  301437 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 00:43:20.244491  301437 cri.go:89] found id: "dadde2213b8a894873343cf42602c1bedb001a3311bd9672a69d0fa4a07d9786"
	I1217 00:43:20.244513  301437 cri.go:89] found id: "117e1e782a79833091ca7f1a9da4be915158517d3d54c5674f3b4e0875f18cce"
	I1217 00:43:20.244519  301437 cri.go:89] found id: "f3a000d40d6d7ebc54a27ecd08dc5aa3b530c6e66b7327ec3ec09941fca5d2ce"
	I1217 00:43:20.244523  301437 cri.go:89] found id: "a770bc08061f975f567cb7fb7cec6883ec6d5215d19863d7ddb2cc0049571d8b"
	I1217 00:43:20.244527  301437 cri.go:89] found id: ""
	I1217 00:43:20.244573  301437 ssh_runner.go:195] Run: sudo runc list -f json
	W1217 00:43:20.256662  301437 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T00:43:20Z" level=error msg="open /run/runc: no such file or directory"
	I1217 00:43:20.256756  301437 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 00:43:20.265068  301437 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1217 00:43:20.265087  301437 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1217 00:43:20.265131  301437 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1217 00:43:20.274191  301437 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1217 00:43:20.275439  301437 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-153232" does not appear in /home/jenkins/minikube-integration/22168-12816/kubeconfig
	I1217 00:43:20.276139  301437 kubeconfig.go:62] /home/jenkins/minikube-integration/22168-12816/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-153232" cluster setting kubeconfig missing "embed-certs-153232" context setting]
	I1217 00:43:20.277088  301437 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/kubeconfig: {Name:mkd977475b39383babd1c1bcd25b2b3c1ea2d727 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:43:20.279505  301437 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1217 00:43:20.288899  301437 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1217 00:43:20.288932  301437 kubeadm.go:602] duration metric: took 23.834476ms to restartPrimaryControlPlane
	I1217 00:43:20.288942  301437 kubeadm.go:403] duration metric: took 77.669136ms to StartCluster
	I1217 00:43:20.288957  301437 settings.go:142] acquiring lock: {Name:mk7d7632cd00ceda791845d793d841181ea8188a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:43:20.289043  301437 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22168-12816/kubeconfig
	I1217 00:43:20.291231  301437 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/kubeconfig: {Name:mkd977475b39383babd1c1bcd25b2b3c1ea2d727 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:43:20.291470  301437 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 00:43:20.291526  301437 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 00:43:20.291628  301437 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-153232"
	I1217 00:43:20.291649  301437 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-153232"
	W1217 00:43:20.291661  301437 addons.go:248] addon storage-provisioner should already be in state true
	I1217 00:43:20.291657  301437 addons.go:70] Setting dashboard=true in profile "embed-certs-153232"
	I1217 00:43:20.291679  301437 addons.go:239] Setting addon dashboard=true in "embed-certs-153232"
	W1217 00:43:20.291691  301437 addons.go:248] addon dashboard should already be in state true
	I1217 00:43:20.291692  301437 host.go:66] Checking if "embed-certs-153232" exists ...
	I1217 00:43:20.291706  301437 config.go:182] Loaded profile config "embed-certs-153232": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:43:20.291718  301437 addons.go:70] Setting default-storageclass=true in profile "embed-certs-153232"
	I1217 00:43:20.291738  301437 host.go:66] Checking if "embed-certs-153232" exists ...
	I1217 00:43:20.291753  301437 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-153232"
	I1217 00:43:20.292122  301437 cli_runner.go:164] Run: docker container inspect embed-certs-153232 --format={{.State.Status}}
	I1217 00:43:20.292245  301437 cli_runner.go:164] Run: docker container inspect embed-certs-153232 --format={{.State.Status}}
	I1217 00:43:20.292252  301437 cli_runner.go:164] Run: docker container inspect embed-certs-153232 --format={{.State.Status}}
	I1217 00:43:20.294162  301437 out.go:179] * Verifying Kubernetes components...
	I1217 00:43:20.295150  301437 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:43:20.318342  301437 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 00:43:20.318343  301437 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1217 00:43:20.319356  301437 addons.go:239] Setting addon default-storageclass=true in "embed-certs-153232"
	W1217 00:43:20.319377  301437 addons.go:248] addon default-storageclass should already be in state true
	I1217 00:43:20.319402  301437 host.go:66] Checking if "embed-certs-153232" exists ...
	I1217 00:43:20.319532  301437 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:43:20.319546  301437 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 00:43:20.319598  301437 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-153232
	I1217 00:43:20.319845  301437 cli_runner.go:164] Run: docker container inspect embed-certs-153232 --format={{.State.Status}}
	I1217 00:43:20.321455  301437 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1217 00:43:20.322471  301437 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1217 00:43:20.322495  301437 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1217 00:43:20.322568  301437 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-153232
	I1217 00:43:20.350391  301437 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 00:43:20.350415  301437 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 00:43:20.350476  301437 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-153232
	I1217 00:43:20.350698  301437 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/embed-certs-153232/id_rsa Username:docker}
	I1217 00:43:20.360061  301437 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/embed-certs-153232/id_rsa Username:docker}
	I1217 00:43:20.377397  301437 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/embed-certs-153232/id_rsa Username:docker}
	I1217 00:43:20.458655  301437 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 00:43:20.474126  301437 node_ready.go:35] waiting up to 6m0s for node "embed-certs-153232" to be "Ready" ...
	I1217 00:43:20.485974  301437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:43:20.510162  301437 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1217 00:43:20.510189  301437 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1217 00:43:20.512807  301437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:43:20.542649  301437 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1217 00:43:20.542742  301437 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1217 00:43:20.587712  301437 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1217 00:43:20.587738  301437 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1217 00:43:20.605651  301437 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1217 00:43:20.605678  301437 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1217 00:43:20.620294  301437 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1217 00:43:20.620318  301437 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1217 00:43:20.634290  301437 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1217 00:43:20.634312  301437 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1217 00:43:20.647497  301437 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1217 00:43:20.647516  301437 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1217 00:43:20.661355  301437 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1217 00:43:20.661392  301437 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1217 00:43:20.679377  301437 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1217 00:43:20.679400  301437 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1217 00:43:20.694254  301437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1217 00:43:21.752174  301437 node_ready.go:49] node "embed-certs-153232" is "Ready"
	I1217 00:43:21.752213  301437 node_ready.go:38] duration metric: took 1.278054903s for node "embed-certs-153232" to be "Ready" ...
	I1217 00:43:21.752231  301437 api_server.go:52] waiting for apiserver process to appear ...
	I1217 00:43:21.752288  301437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
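The process probe above uses pgrep with -f (match against the full command line), -x (the pattern must match that whole line) and -n (print only the newest match), so it resolves to the PID of the running kube-apiserver, whose flags reference paths under /var/lib/minikube:

    sudo pgrep -xnf 'kube-apiserver.*minikube.*'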
	I1217 00:43:22.314802  301437 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.828796721s)
	I1217 00:43:22.314865  301437 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.802027348s)
	I1217 00:43:22.315080  301437 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.620787139s)
	I1217 00:43:22.315127  301437 api_server.go:72] duration metric: took 2.023626853s to wait for apiserver process to appear ...
	I1217 00:43:22.315147  301437 api_server.go:88] waiting for apiserver healthz status ...
	I1217 00:43:22.315211  301437 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1217 00:43:22.316461  301437 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-153232 addons enable metrics-server
	
	I1217 00:43:22.322339  301437 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1217 00:43:22.322362  301437 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1217 00:43:22.327577  301437 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1217 00:43:22.330339  301437 addons.go:530] duration metric: took 2.038821808s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1217 00:43:22.815737  301437 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1217 00:43:22.821436  301437 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1217 00:43:22.821463  301437 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
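Of the checks listed, only [-]poststarthook/rbac/bootstrap-roles is still failing by this second poll (scheduling/bootstrap-system-priority-classes has already flipped to ok), the usual transient state right after an apiserver restart while the bootstrap RBAC objects are re-created. The same verbose probe can be repeated by hand, e.g.:

    curl -sk https://192.168.85.2:8443/healthz?verbose

Anonymous access to /healthz is normally permitted via the system:public-info-viewer role, though it may return 403 until the RBAC bootstrap above completes.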
	
	
	==> CRI-O <==
	Dec 17 00:43:17 newest-cni-653717 crio[524]: time="2025-12-17T00:43:17.676364961Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 00:43:17 newest-cni-653717 crio[524]: time="2025-12-17T00:43:17.679598075Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=970c12eb-d302-4902-982a-e17fa7460ebd name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 17 00:43:17 newest-cni-653717 crio[524]: time="2025-12-17T00:43:17.680093257Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=b5e3616c-d6d5-4f9b-974a-5a799e82d59c name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 17 00:43:17 newest-cni-653717 crio[524]: time="2025-12-17T00:43:17.68095013Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 17 00:43:17 newest-cni-653717 crio[524]: time="2025-12-17T00:43:17.68158869Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 17 00:43:17 newest-cni-653717 crio[524]: time="2025-12-17T00:43:17.681768467Z" level=info msg="Ran pod sandbox 69f7c5ef1f5464482d5deaa8b03d5e7d7cba099fbe94cfe84e602f763a11268c with infra container: kube-system/kindnet-xmw8c/POD" id=970c12eb-d302-4902-982a-e17fa7460ebd name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 17 00:43:17 newest-cni-653717 crio[524]: time="2025-12-17T00:43:17.682518702Z" level=info msg="Ran pod sandbox e070405644c0a834a9e1609e11c5b5d30e0cb39c5148560aad953a9f5de4ce88 with infra container: kube-system/kube-proxy-9jd8t/POD" id=b5e3616c-d6d5-4f9b-974a-5a799e82d59c name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 17 00:43:17 newest-cni-653717 crio[524]: time="2025-12-17T00:43:17.682852572Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=f2f5b42c-2df0-404e-b106-b340569884ff name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:43:17 newest-cni-653717 crio[524]: time="2025-12-17T00:43:17.68352994Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=fe67e68a-058e-4430-92b9-2b5a8feefd8d name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:43:17 newest-cni-653717 crio[524]: time="2025-12-17T00:43:17.683800439Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=70895c75-d02b-421e-83ec-42593dd42d3f name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:43:17 newest-cni-653717 crio[524]: time="2025-12-17T00:43:17.684470528Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=84f8e91c-5e5b-4740-b1a5-884c78a9e74a name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:43:17 newest-cni-653717 crio[524]: time="2025-12-17T00:43:17.684822501Z" level=info msg="Creating container: kube-system/kindnet-xmw8c/kindnet-cni" id=91eca6c8-378b-432b-830d-82f7ead610ea name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 00:43:17 newest-cni-653717 crio[524]: time="2025-12-17T00:43:17.685118698Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 00:43:17 newest-cni-653717 crio[524]: time="2025-12-17T00:43:17.685624609Z" level=info msg="Creating container: kube-system/kube-proxy-9jd8t/kube-proxy" id=9f3ed6f9-e074-4d7b-a7a7-ab8e1c19f401 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 00:43:17 newest-cni-653717 crio[524]: time="2025-12-17T00:43:17.685795434Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 00:43:17 newest-cni-653717 crio[524]: time="2025-12-17T00:43:17.690071673Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 00:43:17 newest-cni-653717 crio[524]: time="2025-12-17T00:43:17.690504708Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 00:43:17 newest-cni-653717 crio[524]: time="2025-12-17T00:43:17.692382945Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 00:43:17 newest-cni-653717 crio[524]: time="2025-12-17T00:43:17.692972469Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 00:43:17 newest-cni-653717 crio[524]: time="2025-12-17T00:43:17.718531045Z" level=info msg="Created container 053bf92e9a5f9a52b4b6ab67762abf5f59e6048f064f1e157e90fe6500e59fb1: kube-system/kindnet-xmw8c/kindnet-cni" id=91eca6c8-378b-432b-830d-82f7ead610ea name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 00:43:17 newest-cni-653717 crio[524]: time="2025-12-17T00:43:17.719216413Z" level=info msg="Starting container: 053bf92e9a5f9a52b4b6ab67762abf5f59e6048f064f1e157e90fe6500e59fb1" id=fafb20f4-1f0e-4b24-8041-9a1f9e21e943 name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 00:43:17 newest-cni-653717 crio[524]: time="2025-12-17T00:43:17.720666623Z" level=info msg="Created container 09db06ede95c5fa90e1d8add618ad4ec6e4856dc88d34335e62d7d1b21d156f1: kube-system/kube-proxy-9jd8t/kube-proxy" id=9f3ed6f9-e074-4d7b-a7a7-ab8e1c19f401 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 00:43:17 newest-cni-653717 crio[524]: time="2025-12-17T00:43:17.721148259Z" level=info msg="Started container" PID=1057 containerID=053bf92e9a5f9a52b4b6ab67762abf5f59e6048f064f1e157e90fe6500e59fb1 description=kube-system/kindnet-xmw8c/kindnet-cni id=fafb20f4-1f0e-4b24-8041-9a1f9e21e943 name=/runtime.v1.RuntimeService/StartContainer sandboxID=69f7c5ef1f5464482d5deaa8b03d5e7d7cba099fbe94cfe84e602f763a11268c
	Dec 17 00:43:17 newest-cni-653717 crio[524]: time="2025-12-17T00:43:17.721229635Z" level=info msg="Starting container: 09db06ede95c5fa90e1d8add618ad4ec6e4856dc88d34335e62d7d1b21d156f1" id=892a0b7d-8e33-49fb-9719-fc708637f597 name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 00:43:17 newest-cni-653717 crio[524]: time="2025-12-17T00:43:17.724172507Z" level=info msg="Started container" PID=1058 containerID=09db06ede95c5fa90e1d8add618ad4ec6e4856dc88d34335e62d7d1b21d156f1 description=kube-system/kube-proxy-9jd8t/kube-proxy id=892a0b7d-8e33-49fb-9719-fc708637f597 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e070405644c0a834a9e1609e11c5b5d30e0cb39c5148560aad953a9f5de4ce88
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	09db06ede95c5       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810   6 seconds ago       Running             kube-proxy                1                   e070405644c0a       kube-proxy-9jd8t                            kube-system
	053bf92e9a5f9       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   6 seconds ago       Running             kindnet-cni               1                   69f7c5ef1f546       kindnet-xmw8c                               kube-system
	de646b7d10806       aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b   8 seconds ago       Running             kube-apiserver            1                   ec71189f7fad3       kube-apiserver-newest-cni-653717            kube-system
	f155d5d25fa50       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc   8 seconds ago       Running             kube-controller-manager   1                   19550e8e69a45       kube-controller-manager-newest-cni-653717   kube-system
	097160c44a70e       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   8 seconds ago       Running             etcd                      1                   2aaad1633c885       etcd-newest-cni-653717                      kube-system
	608d066efbe10       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46   8 seconds ago       Running             kube-scheduler            1                   e914c561da2a6       kube-scheduler-newest-cni-653717            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-653717
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-653717
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c7bb9b74fe8fa422b352c813eb039f077f405cb1
	                    minikube.k8s.io/name=newest-cni-653717
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_17T00_42_57_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Dec 2025 00:42:54 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-653717
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Dec 2025 00:43:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Dec 2025 00:43:16 +0000   Wed, 17 Dec 2025 00:42:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Dec 2025 00:43:16 +0000   Wed, 17 Dec 2025 00:42:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Dec 2025 00:43:16 +0000   Wed, 17 Dec 2025 00:42:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Wed, 17 Dec 2025 00:43:16 +0000   Wed, 17 Dec 2025 00:42:52 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    newest-cni-653717
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 f435001bb2f82675fe905cfc693dda54
	  System UUID:                50a52395-faf4-409e-a6ea-aa486ab479f3
	  Boot ID:                    0e9cedc6-c46e-4354-b3d2-9272a8b33ae5
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-653717                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         27s
	  kube-system                 kindnet-xmw8c                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      22s
	  kube-system                 kube-apiserver-newest-cni-653717             250m (3%)     0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-controller-manager-newest-cni-653717    200m (2%)     0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-proxy-9jd8t                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         22s
	  kube-system                 kube-scheduler-newest-cni-653717             100m (1%)     0 (0%)      0 (0%)           0 (0%)         27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  22s   node-controller  Node newest-cni-653717 event: Registered Node newest-cni-653717 in Controller
	  Normal  RegisteredNode  4s    node-controller  Node newest-cni-653717 event: Registered Node newest-cni-653717 in Controller
	
	
	==> dmesg <==
	[  +0.089382] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024236] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.864694] kauditd_printk_skb: 47 callbacks suppressed
	[Dec17 00:07] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +1.006904] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +1.023891] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +1.023891] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +1.023853] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +1.023879] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +2.048755] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +4.030595] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +8.447143] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[ +16.382404] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000015] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[Dec17 00:08] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	
	
	==> etcd [097160c44a70ef0edf501027a68baa41f06ea618f8b735835c64eb3b3c78f426] <==
	{"level":"warn","ts":"2025-12-17T00:43:15.918148Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46052","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:15.928902Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46076","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:15.935185Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46078","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:15.942716Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46096","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:15.948848Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:15.955608Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46140","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:15.961536Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46152","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:15.968517Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46172","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:15.974806Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46182","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:15.987185Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:15.993204Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:16.000232Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46232","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:16.006193Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46260","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:16.012438Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46292","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:16.018541Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:16.024685Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:16.031433Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46328","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:16.037627Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46334","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:16.043871Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46348","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:16.050824Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46370","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:16.065880Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:16.072288Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46398","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:16.079928Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46422","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:16.087893Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46438","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:16.140539Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46458","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 00:43:23 up  1:25,  0 user,  load average: 3.63, 2.89, 1.96
	Linux newest-cni-653717 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [053bf92e9a5f9a52b4b6ab67762abf5f59e6048f064f1e157e90fe6500e59fb1] <==
	I1217 00:43:17.932471       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1217 00:43:17.932700       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1217 00:43:17.932816       1 main.go:148] setting mtu 1500 for CNI 
	I1217 00:43:17.932836       1 main.go:178] kindnetd IP family: "ipv4"
	I1217 00:43:17.932853       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-17T00:43:18Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1217 00:43:18.135027       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1217 00:43:18.135079       1 controller.go:381] "Waiting for informer caches to sync"
	I1217 00:43:18.135094       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1217 00:43:18.135226       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1217 00:43:18.435537       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1217 00:43:18.435595       1 metrics.go:72] Registering metrics
	I1217 00:43:18.435702       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [de646b7d108062a9c689d337e628f48d29162ce37e015f70f4dfde0b63fd7fe1] <==
	I1217 00:43:16.589361       1 aggregator.go:187] initial CRD sync complete...
	I1217 00:43:16.589391       1 autoregister_controller.go:144] Starting autoregister controller
	I1217 00:43:16.589398       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1217 00:43:16.589405       1 cache.go:39] Caches are synced for autoregister controller
	E1217 00:43:16.589782       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1217 00:43:16.590051       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1217 00:43:16.590070       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1217 00:43:16.594192       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1217 00:43:16.602071       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1217 00:43:16.606372       1 shared_informer.go:377] "Caches are synced"
	I1217 00:43:16.606390       1 policy_source.go:248] refreshing policies
	I1217 00:43:16.606408       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1217 00:43:16.621657       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1217 00:43:16.858691       1 controller.go:667] quota admission added evaluator for: namespaces
	I1217 00:43:16.884148       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1217 00:43:16.902105       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1217 00:43:16.908038       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1217 00:43:16.913481       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1217 00:43:16.943070       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.109.105.94"}
	I1217 00:43:16.955291       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.96.9.195"}
	I1217 00:43:17.486342       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1217 00:43:20.126573       1 controller.go:667] quota admission added evaluator for: endpoints
	I1217 00:43:20.275418       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1217 00:43:20.380117       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1217 00:43:20.451512       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-controller-manager [f155d5d25fa50fc257ecd4b7da29e9d818c2cfa5f80f0f2c2dfa23e5b3025e69] <==
	I1217 00:43:19.730225       1 shared_informer.go:377] "Caches are synced"
	I1217 00:43:19.730915       1 shared_informer.go:377] "Caches are synced"
	I1217 00:43:19.730975       1 shared_informer.go:377] "Caches are synced"
	I1217 00:43:19.731066       1 shared_informer.go:377] "Caches are synced"
	I1217 00:43:19.731117       1 shared_informer.go:377] "Caches are synced"
	I1217 00:43:19.731153       1 shared_informer.go:377] "Caches are synced"
	I1217 00:43:19.731118       1 shared_informer.go:377] "Caches are synced"
	I1217 00:43:19.731174       1 shared_informer.go:377] "Caches are synced"
	I1217 00:43:19.731126       1 range_allocator.go:177] "Sending events to api server"
	I1217 00:43:19.731105       1 shared_informer.go:377] "Caches are synced"
	I1217 00:43:19.731134       1 shared_informer.go:377] "Caches are synced"
	I1217 00:43:19.731227       1 shared_informer.go:377] "Caches are synced"
	I1217 00:43:19.731244       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1217 00:43:19.731250       1 shared_informer.go:370] "Waiting for caches to sync"
	I1217 00:43:19.731255       1 shared_informer.go:377] "Caches are synced"
	I1217 00:43:19.731259       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1217 00:43:19.731144       1 shared_informer.go:377] "Caches are synced"
	I1217 00:43:19.731319       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="newest-cni-653717"
	I1217 00:43:19.731385       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I1217 00:43:19.735701       1 shared_informer.go:370] "Waiting for caches to sync"
	I1217 00:43:19.736784       1 shared_informer.go:377] "Caches are synced"
	I1217 00:43:19.831512       1 shared_informer.go:377] "Caches are synced"
	I1217 00:43:19.831535       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1217 00:43:19.831539       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1217 00:43:19.836889       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [09db06ede95c5fa90e1d8add618ad4ec6e4856dc88d34335e62d7d1b21d156f1] <==
	I1217 00:43:17.765521       1 server_linux.go:53] "Using iptables proxy"
	I1217 00:43:17.834345       1 shared_informer.go:370] "Waiting for caches to sync"
	I1217 00:43:17.934770       1 shared_informer.go:377] "Caches are synced"
	I1217 00:43:17.934816       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1217 00:43:17.935085       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1217 00:43:17.953144       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1217 00:43:17.953212       1 server_linux.go:136] "Using iptables Proxier"
	I1217 00:43:17.958001       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1217 00:43:17.958355       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1217 00:43:17.958379       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 00:43:17.959828       1 config.go:106] "Starting endpoint slice config controller"
	I1217 00:43:17.959865       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1217 00:43:17.959869       1 config.go:200] "Starting service config controller"
	I1217 00:43:17.959890       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1217 00:43:17.959929       1 config.go:403] "Starting serviceCIDR config controller"
	I1217 00:43:17.959937       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1217 00:43:17.960005       1 config.go:309] "Starting node config controller"
	I1217 00:43:17.960017       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1217 00:43:17.960028       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1217 00:43:18.060910       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1217 00:43:18.060951       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1217 00:43:18.060951       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [608d066efbe101d85b2f7a5a7b16d1ad974b66b117a0796b4196b6e3e5f4c30a] <==
	I1217 00:43:15.222980       1 serving.go:386] Generated self-signed cert in-memory
	W1217 00:43:16.508440       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1217 00:43:16.508473       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1217 00:43:16.508484       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1217 00:43:16.508493       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1217 00:43:16.537748       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-beta.0"
	I1217 00:43:16.537780       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 00:43:16.540607       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1217 00:43:16.540784       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1217 00:43:16.541036       1 shared_informer.go:370] "Waiting for caches to sync"
	I1217 00:43:16.541827       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1217 00:43:16.641559       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 17 00:43:16 newest-cni-653717 kubelet[677]: I1217 00:43:16.685583     677 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-653717"
	Dec 17 00:43:16 newest-cni-653717 kubelet[677]: E1217 00:43:16.691129     677 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-653717\" already exists" pod="kube-system/etcd-newest-cni-653717"
	Dec 17 00:43:16 newest-cni-653717 kubelet[677]: I1217 00:43:16.691157     677 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-653717"
	Dec 17 00:43:16 newest-cni-653717 kubelet[677]: E1217 00:43:16.698796     677 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-653717\" already exists" pod="kube-system/kube-apiserver-newest-cni-653717"
	Dec 17 00:43:17 newest-cni-653717 kubelet[677]: I1217 00:43:17.365129     677 apiserver.go:52] "Watching apiserver"
	Dec 17 00:43:17 newest-cni-653717 kubelet[677]: E1217 00:43:17.369831     677 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-653717" containerName="kube-controller-manager"
	Dec 17 00:43:17 newest-cni-653717 kubelet[677]: E1217 00:43:17.404920     677 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-653717" containerName="kube-apiserver"
	Dec 17 00:43:17 newest-cni-653717 kubelet[677]: I1217 00:43:17.405086     677 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-653717"
	Dec 17 00:43:17 newest-cni-653717 kubelet[677]: I1217 00:43:17.405237     677 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-653717"
	Dec 17 00:43:17 newest-cni-653717 kubelet[677]: E1217 00:43:17.411040     677 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-653717\" already exists" pod="kube-system/kube-scheduler-newest-cni-653717"
	Dec 17 00:43:17 newest-cni-653717 kubelet[677]: E1217 00:43:17.411272     677 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-653717" containerName="kube-scheduler"
	Dec 17 00:43:17 newest-cni-653717 kubelet[677]: E1217 00:43:17.411790     677 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-653717\" already exists" pod="kube-system/etcd-newest-cni-653717"
	Dec 17 00:43:17 newest-cni-653717 kubelet[677]: E1217 00:43:17.411872     677 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-653717" containerName="etcd"
	Dec 17 00:43:17 newest-cni-653717 kubelet[677]: I1217 00:43:17.469851     677 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Dec 17 00:43:17 newest-cni-653717 kubelet[677]: I1217 00:43:17.532930     677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e7d2bcca-b703-4fd2-9af0-c08825a47e85-lib-modules\") pod \"kube-proxy-9jd8t\" (UID: \"e7d2bcca-b703-4fd2-9af0-c08825a47e85\") " pod="kube-system/kube-proxy-9jd8t"
	Dec 17 00:43:17 newest-cni-653717 kubelet[677]: I1217 00:43:17.533014     677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/7688d3d1-e8d9-4b27-bd63-412f8972c114-cni-cfg\") pod \"kindnet-xmw8c\" (UID: \"7688d3d1-e8d9-4b27-bd63-412f8972c114\") " pod="kube-system/kindnet-xmw8c"
	Dec 17 00:43:17 newest-cni-653717 kubelet[677]: I1217 00:43:17.533089     677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7688d3d1-e8d9-4b27-bd63-412f8972c114-lib-modules\") pod \"kindnet-xmw8c\" (UID: \"7688d3d1-e8d9-4b27-bd63-412f8972c114\") " pod="kube-system/kindnet-xmw8c"
	Dec 17 00:43:17 newest-cni-653717 kubelet[677]: I1217 00:43:17.533149     677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e7d2bcca-b703-4fd2-9af0-c08825a47e85-xtables-lock\") pod \"kube-proxy-9jd8t\" (UID: \"e7d2bcca-b703-4fd2-9af0-c08825a47e85\") " pod="kube-system/kube-proxy-9jd8t"
	Dec 17 00:43:17 newest-cni-653717 kubelet[677]: I1217 00:43:17.533391     677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7688d3d1-e8d9-4b27-bd63-412f8972c114-xtables-lock\") pod \"kindnet-xmw8c\" (UID: \"7688d3d1-e8d9-4b27-bd63-412f8972c114\") " pod="kube-system/kindnet-xmw8c"
	Dec 17 00:43:18 newest-cni-653717 kubelet[677]: E1217 00:43:18.410491     677 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-653717" containerName="kube-scheduler"
	Dec 17 00:43:18 newest-cni-653717 kubelet[677]: E1217 00:43:18.410585     677 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-653717" containerName="etcd"
	Dec 17 00:43:18 newest-cni-653717 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 17 00:43:18 newest-cni-653717 kubelet[677]: I1217 00:43:18.783758     677 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Dec 17 00:43:18 newest-cni-653717 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 17 00:43:18 newest-cni-653717 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
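The describe-nodes output above shows the node still carrying the node.kubernetes.io/not-ready taint because the kubelet reported no CNI configuration in /etc/cni/net.d at capture time (the kindnet container had only been running for a few seconds). A minimal way to re-check that state by hand, using the profile/context name from this run (illustrative commands, not part of the test):

	# illustrative reproduction, not part of the test run
	kubectl --context newest-cni-653717 get nodes
	kubectl --context newest-cni-653717 get node newest-cni-653717 -o jsonpath='{.spec.taints}'
	# does the CNI config that kindnet writes exist yet on the node?
	minikube -p newest-cni-653717 ssh -- ls /etc/cni/net.d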
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-653717 -n newest-cni-653717
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-653717 -n newest-cni-653717: exit status 2 (349.350779ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context newest-cni-653717 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: coredns-7d764666f9-djwjl storage-provisioner dashboard-metrics-scraper-867fb5f87b-8kf6f kubernetes-dashboard-b84665fb8-9x2f8
helpers_test.go:283: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context newest-cni-653717 describe pod coredns-7d764666f9-djwjl storage-provisioner dashboard-metrics-scraper-867fb5f87b-8kf6f kubernetes-dashboard-b84665fb8-9x2f8
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context newest-cni-653717 describe pod coredns-7d764666f9-djwjl storage-provisioner dashboard-metrics-scraper-867fb5f87b-8kf6f kubernetes-dashboard-b84665fb8-9x2f8: exit status 1 (67.35802ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-7d764666f9-djwjl" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-867fb5f87b-8kf6f" not found
	Error from server (NotFound): pods "kubernetes-dashboard-b84665fb8-9x2f8" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context newest-cni-653717 describe pod coredns-7d764666f9-djwjl storage-provisioner dashboard-metrics-scraper-867fb5f87b-8kf6f kubernetes-dashboard-b84665fb8-9x2f8: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (6.29s)
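For reference, the post-mortem above collects the non-Running pods with a field selector; the same query can be replayed by hand (context name from this run, the describe target is illustrative). Note that the helper's describe call omits a namespace, which appears to be why it reports NotFound even though the listed pods live in kube-system and kubernetes-dashboard.

	# illustrative reproduction, not part of the test run
	kubectl --context newest-cni-653717 get po -A --field-selector=status.phase!=Running \
	  -o=jsonpath='{.items[*].metadata.name}'
	# describe one of them with its namespace supplied explicitly
	kubectl --context newest-cni-653717 -n kube-system describe pod coredns-7d764666f9-djwjl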

                                                
                                    

TestStartStop/group/no-preload/serial/Pause (6.64s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-864613 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p no-preload-864613 --alsologtostderr -v=1: exit status 80 (2.186466164s)

                                                
                                                
-- stdout --
	* Pausing node no-preload-864613 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 00:43:38.147643  311182 out.go:360] Setting OutFile to fd 1 ...
	I1217 00:43:38.147987  311182 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:43:38.148019  311182 out.go:374] Setting ErrFile to fd 2...
	I1217 00:43:38.148024  311182 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:43:38.149651  311182 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-12816/.minikube/bin
	I1217 00:43:38.150058  311182 out.go:368] Setting JSON to false
	I1217 00:43:38.150099  311182 mustload.go:66] Loading cluster: no-preload-864613
	I1217 00:43:38.150610  311182 config.go:182] Loaded profile config "no-preload-864613": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1217 00:43:38.151193  311182 cli_runner.go:164] Run: docker container inspect no-preload-864613 --format={{.State.Status}}
	I1217 00:43:38.170726  311182 host.go:66] Checking if "no-preload-864613" exists ...
	I1217 00:43:38.170976  311182 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:43:38.232430  311182 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:81 OomKillDisable:false NGoroutines:87 SystemTime:2025-12-17 00:43:38.221456773 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 00:43:38.233187  311182 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1765846775-22141/minikube-v1.37.0-1765846775-22141-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1765846775-22141-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:no-preload-864613 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true)
wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1217 00:43:38.235160  311182 out.go:179] * Pausing node no-preload-864613 ... 
	I1217 00:43:38.236299  311182 host.go:66] Checking if "no-preload-864613" exists ...
	I1217 00:43:38.236728  311182 ssh_runner.go:195] Run: systemctl --version
	I1217 00:43:38.236786  311182 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-864613
	I1217 00:43:38.258241  311182 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/no-preload-864613/id_rsa Username:docker}
	I1217 00:43:38.352129  311182 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 00:43:38.379752  311182 pause.go:52] kubelet running: true
	I1217 00:43:38.379840  311182 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1217 00:43:38.544800  311182 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1217 00:43:38.544893  311182 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1217 00:43:38.616300  311182 cri.go:89] found id: "5587cb88805177c695bf1ec86ad55d11c6c3c94174e7b9d4fd7505596629efb9"
	I1217 00:43:38.616323  311182 cri.go:89] found id: "992ca65d8279bc176a68afbe577e49037ece762e1bbf7e625c4270f35d29840c"
	I1217 00:43:38.616329  311182 cri.go:89] found id: "c168143de300c008ec57bfb6f217961739426196e44b7a3fe545f9f941260c0a"
	I1217 00:43:38.616334  311182 cri.go:89] found id: "40909a37f96e05409eb1c53f56f9585bf17482d70eae48d671deb9c28e8a104c"
	I1217 00:43:38.616338  311182 cri.go:89] found id: "b2af3f621d169db6db7a50be514e4c022a2caa38e1084d576131e2475f388d5d"
	I1217 00:43:38.616343  311182 cri.go:89] found id: "4b34ed74185a723d1987fd893c6b89aa61e85dd77a4391ea83bf44f5d07a0931"
	I1217 00:43:38.616348  311182 cri.go:89] found id: "a590d671bfa52ffb77f09298e606dd5a6cef506d25bf7c749bd516cf65fabaab"
	I1217 00:43:38.616352  311182 cri.go:89] found id: "a12cf220a059b218df62a14f9045f72149c1009f3507c8c36e206fdf43dc9d57"
	I1217 00:43:38.616356  311182 cri.go:89] found id: "d592a6ba05b7b5e2d53ffd9b29510a47348394c0b8faf29e99d49dce869dbeff"
	I1217 00:43:38.616369  311182 cri.go:89] found id: "b3918d1baa01c6eee1e18e913b70130777045f1037ee97fb2baa52d82998123b"
	I1217 00:43:38.616378  311182 cri.go:89] found id: "e446ab4cc7b9aeb434956ba232e1f5873d98c50b20e63779da4e13870a2d7e30"
	I1217 00:43:38.616382  311182 cri.go:89] found id: ""
	I1217 00:43:38.616434  311182 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 00:43:38.630275  311182 retry.go:31] will retry after 304.865479ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T00:43:38Z" level=error msg="open /run/runc: no such file or directory"
	I1217 00:43:38.935822  311182 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 00:43:38.949573  311182 pause.go:52] kubelet running: false
	I1217 00:43:38.949628  311182 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1217 00:43:39.118088  311182 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1217 00:43:39.118183  311182 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1217 00:43:39.184063  311182 cri.go:89] found id: "5587cb88805177c695bf1ec86ad55d11c6c3c94174e7b9d4fd7505596629efb9"
	I1217 00:43:39.184091  311182 cri.go:89] found id: "992ca65d8279bc176a68afbe577e49037ece762e1bbf7e625c4270f35d29840c"
	I1217 00:43:39.184095  311182 cri.go:89] found id: "c168143de300c008ec57bfb6f217961739426196e44b7a3fe545f9f941260c0a"
	I1217 00:43:39.184099  311182 cri.go:89] found id: "40909a37f96e05409eb1c53f56f9585bf17482d70eae48d671deb9c28e8a104c"
	I1217 00:43:39.184102  311182 cri.go:89] found id: "b2af3f621d169db6db7a50be514e4c022a2caa38e1084d576131e2475f388d5d"
	I1217 00:43:39.184106  311182 cri.go:89] found id: "4b34ed74185a723d1987fd893c6b89aa61e85dd77a4391ea83bf44f5d07a0931"
	I1217 00:43:39.184108  311182 cri.go:89] found id: "a590d671bfa52ffb77f09298e606dd5a6cef506d25bf7c749bd516cf65fabaab"
	I1217 00:43:39.184112  311182 cri.go:89] found id: "a12cf220a059b218df62a14f9045f72149c1009f3507c8c36e206fdf43dc9d57"
	I1217 00:43:39.184114  311182 cri.go:89] found id: "d592a6ba05b7b5e2d53ffd9b29510a47348394c0b8faf29e99d49dce869dbeff"
	I1217 00:43:39.184125  311182 cri.go:89] found id: "b3918d1baa01c6eee1e18e913b70130777045f1037ee97fb2baa52d82998123b"
	I1217 00:43:39.184128  311182 cri.go:89] found id: "e446ab4cc7b9aeb434956ba232e1f5873d98c50b20e63779da4e13870a2d7e30"
	I1217 00:43:39.184131  311182 cri.go:89] found id: ""
	I1217 00:43:39.184169  311182 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 00:43:39.195628  311182 retry.go:31] will retry after 268.990751ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T00:43:39Z" level=error msg="open /run/runc: no such file or directory"
	I1217 00:43:39.465120  311182 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 00:43:39.478454  311182 pause.go:52] kubelet running: false
	I1217 00:43:39.478519  311182 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1217 00:43:39.641859  311182 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1217 00:43:39.641958  311182 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1217 00:43:39.710035  311182 cri.go:89] found id: "5587cb88805177c695bf1ec86ad55d11c6c3c94174e7b9d4fd7505596629efb9"
	I1217 00:43:39.710063  311182 cri.go:89] found id: "992ca65d8279bc176a68afbe577e49037ece762e1bbf7e625c4270f35d29840c"
	I1217 00:43:39.710068  311182 cri.go:89] found id: "c168143de300c008ec57bfb6f217961739426196e44b7a3fe545f9f941260c0a"
	I1217 00:43:39.710071  311182 cri.go:89] found id: "40909a37f96e05409eb1c53f56f9585bf17482d70eae48d671deb9c28e8a104c"
	I1217 00:43:39.710074  311182 cri.go:89] found id: "b2af3f621d169db6db7a50be514e4c022a2caa38e1084d576131e2475f388d5d"
	I1217 00:43:39.710078  311182 cri.go:89] found id: "4b34ed74185a723d1987fd893c6b89aa61e85dd77a4391ea83bf44f5d07a0931"
	I1217 00:43:39.710080  311182 cri.go:89] found id: "a590d671bfa52ffb77f09298e606dd5a6cef506d25bf7c749bd516cf65fabaab"
	I1217 00:43:39.710083  311182 cri.go:89] found id: "a12cf220a059b218df62a14f9045f72149c1009f3507c8c36e206fdf43dc9d57"
	I1217 00:43:39.710086  311182 cri.go:89] found id: "d592a6ba05b7b5e2d53ffd9b29510a47348394c0b8faf29e99d49dce869dbeff"
	I1217 00:43:39.710104  311182 cri.go:89] found id: "b3918d1baa01c6eee1e18e913b70130777045f1037ee97fb2baa52d82998123b"
	I1217 00:43:39.710110  311182 cri.go:89] found id: "e446ab4cc7b9aeb434956ba232e1f5873d98c50b20e63779da4e13870a2d7e30"
	I1217 00:43:39.710112  311182 cri.go:89] found id: ""
	I1217 00:43:39.710154  311182 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 00:43:39.722524  311182 retry.go:31] will retry after 298.709064ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T00:43:39Z" level=error msg="open /run/runc: no such file or directory"
	I1217 00:43:40.022087  311182 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 00:43:40.035885  311182 pause.go:52] kubelet running: false
	I1217 00:43:40.035945  311182 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1217 00:43:40.180828  311182 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1217 00:43:40.180904  311182 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1217 00:43:40.250854  311182 cri.go:89] found id: "5587cb88805177c695bf1ec86ad55d11c6c3c94174e7b9d4fd7505596629efb9"
	I1217 00:43:40.250874  311182 cri.go:89] found id: "992ca65d8279bc176a68afbe577e49037ece762e1bbf7e625c4270f35d29840c"
	I1217 00:43:40.250879  311182 cri.go:89] found id: "c168143de300c008ec57bfb6f217961739426196e44b7a3fe545f9f941260c0a"
	I1217 00:43:40.250884  311182 cri.go:89] found id: "40909a37f96e05409eb1c53f56f9585bf17482d70eae48d671deb9c28e8a104c"
	I1217 00:43:40.250888  311182 cri.go:89] found id: "b2af3f621d169db6db7a50be514e4c022a2caa38e1084d576131e2475f388d5d"
	I1217 00:43:40.250892  311182 cri.go:89] found id: "4b34ed74185a723d1987fd893c6b89aa61e85dd77a4391ea83bf44f5d07a0931"
	I1217 00:43:40.250897  311182 cri.go:89] found id: "a590d671bfa52ffb77f09298e606dd5a6cef506d25bf7c749bd516cf65fabaab"
	I1217 00:43:40.250901  311182 cri.go:89] found id: "a12cf220a059b218df62a14f9045f72149c1009f3507c8c36e206fdf43dc9d57"
	I1217 00:43:40.250905  311182 cri.go:89] found id: "d592a6ba05b7b5e2d53ffd9b29510a47348394c0b8faf29e99d49dce869dbeff"
	I1217 00:43:40.250930  311182 cri.go:89] found id: "b3918d1baa01c6eee1e18e913b70130777045f1037ee97fb2baa52d82998123b"
	I1217 00:43:40.250936  311182 cri.go:89] found id: "e446ab4cc7b9aeb434956ba232e1f5873d98c50b20e63779da4e13870a2d7e30"
	I1217 00:43:40.250940  311182 cri.go:89] found id: ""
	I1217 00:43:40.250986  311182 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 00:43:40.268403  311182 out.go:203] 
	W1217 00:43:40.269598  311182 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T00:43:40Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T00:43:40Z" level=error msg="open /run/runc: no such file or directory"
	
	W1217 00:43:40.269618  311182 out.go:285] * 
	* 
	W1217 00:43:40.273909  311182 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 00:43:40.275099  311182 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p no-preload-864613 --alsologtostderr -v=1 failed: exit status 80
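Per the stderr trace above, the pause path stops the kubelet, lists kube-system/kubernetes-dashboard/istio-operator containers with crictl, then tries to enumerate running containers with "sudo runc list -f json", and that last step is what fails: /run/runc, runc's default state directory, does not exist on this node. The two node-side commands can be replayed over minikube ssh (profile name from this run; illustrative only):

	# illustrative reproduction, not part of the test run
	minikube -p no-preload-864613 ssh -- 'sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system'
	minikube -p no-preload-864613 ssh -- 'sudo runc list -f json'   # reproduces: open /run/runc: no such file or directory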
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-864613
helpers_test.go:244: (dbg) docker inspect no-preload-864613:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d31578a000b6bc0fd7f6db18dfc484bf6d5c523079339ecebac6aa5e2a0209d9",
	        "Created": "2025-12-17T00:41:22.987777185Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 290462,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T00:42:35.021551855Z",
	            "FinishedAt": "2025-12-17T00:42:34.023149179Z"
	        },
	        "Image": "sha256:2e44aac5cae5bb6b68b129ed5c85e80a5c1aac07706537d46ba12326f0e5c3cf",
	        "ResolvConfPath": "/var/lib/docker/containers/d31578a000b6bc0fd7f6db18dfc484bf6d5c523079339ecebac6aa5e2a0209d9/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d31578a000b6bc0fd7f6db18dfc484bf6d5c523079339ecebac6aa5e2a0209d9/hostname",
	        "HostsPath": "/var/lib/docker/containers/d31578a000b6bc0fd7f6db18dfc484bf6d5c523079339ecebac6aa5e2a0209d9/hosts",
	        "LogPath": "/var/lib/docker/containers/d31578a000b6bc0fd7f6db18dfc484bf6d5c523079339ecebac6aa5e2a0209d9/d31578a000b6bc0fd7f6db18dfc484bf6d5c523079339ecebac6aa5e2a0209d9-json.log",
	        "Name": "/no-preload-864613",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "no-preload-864613:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-864613",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d31578a000b6bc0fd7f6db18dfc484bf6d5c523079339ecebac6aa5e2a0209d9",
	                "LowerDir": "/var/lib/docker/overlay2/f190c06e656d738f85b08c978b5e137744361ddd53ad1e7f79ae34378398bcd5-init/diff:/var/lib/docker/overlay2/594b812fd6d8db89dab322ea9e00d43dd555e9709fb5e6953e3873cce717392c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f190c06e656d738f85b08c978b5e137744361ddd53ad1e7f79ae34378398bcd5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f190c06e656d738f85b08c978b5e137744361ddd53ad1e7f79ae34378398bcd5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f190c06e656d738f85b08c978b5e137744361ddd53ad1e7f79ae34378398bcd5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-864613",
	                "Source": "/var/lib/docker/volumes/no-preload-864613/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-864613",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-864613",
	                "name.minikube.sigs.k8s.io": "no-preload-864613",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "04de50d436722f7da70f41b390f15d4b9049c0521ef600919ec9cadf780c4d6a",
	            "SandboxKey": "/var/run/docker/netns/04de50d43672",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33083"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33084"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33087"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33085"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33086"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-864613": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f576aec2f4916437744d456261513e7c90cb52cd053227c69a0accdc704e8654",
	                    "EndpointID": "2d800be25b6aaf3b556b6b5936efdd7e9844a5fab6e18e247c68373baf3154f4",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "b2:d9:36:e8:ad:bc",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-864613",
	                        "d31578a000b6"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
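Note: the inspect dump above is where the harness reads the 22/tcp -> 33083 SSH port mapping it dials during the test; the same Go template shows up in the cli_runner lines later in this log. A minimal manual check of that mapping (assuming the no-preload-864613 container still exists on the build host) would be:

	# Read the first host port bound to the container's SSH port (22/tcp), as the harness does
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' no-preload-864613
	# Or dump the whole port map as JSON for comparison with the NetworkSettings.Ports block above
	docker container inspect -f '{{json .NetworkSettings.Ports}}' no-preload-864613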
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-864613 -n no-preload-864613
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-864613 -n no-preload-864613: exit status 2 (321.848164ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
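The status probe can be replayed by hand with the same binary and flags logged above; the harness tolerates the non-zero exit here ("may be ok") since a paused profile is expected to report a non-running component state rather than a clean status. A sketch, assuming the profile still exists on the build host:

	# Re-run the status check exactly as the harness did and capture its exit code
	out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-864613 -n no-preload-864613
	echo "status exit code: $?"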
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-864613 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p no-preload-864613 logs -n 25: (1.079179795s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬───────
──────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼───────
──────────────┤
	│ pause   │ -p old-k8s-version-742860 --alsologtostderr -v=1                                                                                                                                                                                                     │ old-k8s-version-742860       │ jenkins │ v1.37.0 │ 17 Dec 25 00:42 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-864613 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ no-preload-864613            │ jenkins │ v1.37.0 │ 17 Dec 25 00:42 UTC │ 17 Dec 25 00:42 UTC │
	│ start   │ -p no-preload-864613 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-864613            │ jenkins │ v1.37.0 │ 17 Dec 25 00:42 UTC │ 17 Dec 25 00:43 UTC │
	│ delete  │ -p old-k8s-version-742860                                                                                                                                                                                                                            │ old-k8s-version-742860       │ jenkins │ v1.37.0 │ 17 Dec 25 00:42 UTC │ 17 Dec 25 00:42 UTC │
	│ delete  │ -p old-k8s-version-742860                                                                                                                                                                                                                            │ old-k8s-version-742860       │ jenkins │ v1.37.0 │ 17 Dec 25 00:42 UTC │ 17 Dec 25 00:42 UTC │
	│ start   │ -p newest-cni-653717 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-653717            │ jenkins │ v1.37.0 │ 17 Dec 25 00:42 UTC │ 17 Dec 25 00:43 UTC │
	│ addons  │ enable metrics-server -p embed-certs-153232 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ embed-certs-153232           │ jenkins │ v1.37.0 │ 17 Dec 25 00:42 UTC │                     │
	│ stop    │ -p embed-certs-153232 --alsologtostderr -v=3                                                                                                                                                                                                         │ embed-certs-153232           │ jenkins │ v1.37.0 │ 17 Dec 25 00:42 UTC │ 17 Dec 25 00:43 UTC │
	│ addons  │ enable metrics-server -p newest-cni-653717 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ newest-cni-653717            │ jenkins │ v1.37.0 │ 17 Dec 25 00:43 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-414413 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                   │ default-k8s-diff-port-414413 │ jenkins │ v1.37.0 │ 17 Dec 25 00:43 UTC │                     │
	│ stop    │ -p newest-cni-653717 --alsologtostderr -v=3                                                                                                                                                                                                          │ newest-cni-653717            │ jenkins │ v1.37.0 │ 17 Dec 25 00:43 UTC │ 17 Dec 25 00:43 UTC │
	│ stop    │ -p default-k8s-diff-port-414413 --alsologtostderr -v=3                                                                                                                                                                                               │ default-k8s-diff-port-414413 │ jenkins │ v1.37.0 │ 17 Dec 25 00:43 UTC │ 17 Dec 25 00:43 UTC │
	│ addons  │ enable dashboard -p newest-cni-653717 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ newest-cni-653717            │ jenkins │ v1.37.0 │ 17 Dec 25 00:43 UTC │ 17 Dec 25 00:43 UTC │
	│ start   │ -p newest-cni-653717 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-653717            │ jenkins │ v1.37.0 │ 17 Dec 25 00:43 UTC │ 17 Dec 25 00:43 UTC │
	│ addons  │ enable dashboard -p embed-certs-153232 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ embed-certs-153232           │ jenkins │ v1.37.0 │ 17 Dec 25 00:43 UTC │ 17 Dec 25 00:43 UTC │
	│ start   │ -p embed-certs-153232 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-153232           │ jenkins │ v1.37.0 │ 17 Dec 25 00:43 UTC │                     │
	│ image   │ newest-cni-653717 image list --format=json                                                                                                                                                                                                           │ newest-cni-653717            │ jenkins │ v1.37.0 │ 17 Dec 25 00:43 UTC │ 17 Dec 25 00:43 UTC │
	│ pause   │ -p newest-cni-653717 --alsologtostderr -v=1                                                                                                                                                                                                          │ newest-cni-653717            │ jenkins │ v1.37.0 │ 17 Dec 25 00:43 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-414413 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                              │ default-k8s-diff-port-414413 │ jenkins │ v1.37.0 │ 17 Dec 25 00:43 UTC │ 17 Dec 25 00:43 UTC │
	│ start   │ -p default-k8s-diff-port-414413 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-414413 │ jenkins │ v1.37.0 │ 17 Dec 25 00:43 UTC │                     │
	│ delete  │ -p newest-cni-653717                                                                                                                                                                                                                                 │ newest-cni-653717            │ jenkins │ v1.37.0 │ 17 Dec 25 00:43 UTC │ 17 Dec 25 00:43 UTC │
	│ delete  │ -p newest-cni-653717                                                                                                                                                                                                                                 │ newest-cni-653717            │ jenkins │ v1.37.0 │ 17 Dec 25 00:43 UTC │ 17 Dec 25 00:43 UTC │
	│ start   │ -p auto-802249 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                              │ auto-802249                  │ jenkins │ v1.37.0 │ 17 Dec 25 00:43 UTC │                     │
	│ image   │ no-preload-864613 image list --format=json                                                                                                                                                                                                           │ no-preload-864613            │ jenkins │ v1.37.0 │ 17 Dec 25 00:43 UTC │ 17 Dec 25 00:43 UTC │
	│ pause   │ -p no-preload-864613 --alsologtostderr -v=1                                                                                                                                                                                                          │ no-preload-864613            │ jenkins │ v1.37.0 │ 17 Dec 25 00:43 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴───────
──────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 00:43:27
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 00:43:27.783899  307526 out.go:360] Setting OutFile to fd 1 ...
	I1217 00:43:27.784188  307526 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:43:27.784198  307526 out.go:374] Setting ErrFile to fd 2...
	I1217 00:43:27.784205  307526 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:43:27.784420  307526 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-12816/.minikube/bin
	I1217 00:43:27.784980  307526 out.go:368] Setting JSON to false
	I1217 00:43:27.786356  307526 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":5158,"bootTime":1765927050,"procs":319,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 00:43:27.786413  307526 start.go:143] virtualization: kvm guest
	I1217 00:43:27.792469  307526 out.go:179] * [auto-802249] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 00:43:27.794123  307526 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 00:43:27.794135  307526 notify.go:221] Checking for updates...
	I1217 00:43:27.796621  307526 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 00:43:27.798252  307526 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22168-12816/kubeconfig
	I1217 00:43:27.800079  307526 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-12816/.minikube
	I1217 00:43:27.801977  307526 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 00:43:27.803368  307526 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 00:43:27.805497  307526 config.go:182] Loaded profile config "default-k8s-diff-port-414413": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:43:27.805617  307526 config.go:182] Loaded profile config "embed-certs-153232": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:43:27.805718  307526 config.go:182] Loaded profile config "no-preload-864613": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1217 00:43:27.805829  307526 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 00:43:27.834498  307526 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1217 00:43:27.834623  307526 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:43:27.893922  307526 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-17 00:43:27.883816453 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 00:43:27.894056  307526 docker.go:319] overlay module found
	I1217 00:43:27.895774  307526 out.go:179] * Using the docker driver based on user configuration
	I1217 00:43:27.896798  307526 start.go:309] selected driver: docker
	I1217 00:43:27.896811  307526 start.go:927] validating driver "docker" against <nil>
	I1217 00:43:27.896822  307526 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 00:43:27.897491  307526 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:43:27.960308  307526 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-17 00:43:27.949730665 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 00:43:27.960526  307526 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1217 00:43:27.960792  307526 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 00:43:27.962270  307526 out.go:179] * Using Docker driver with root privileges
	I1217 00:43:27.963165  307526 cni.go:84] Creating CNI manager for ""
	I1217 00:43:27.963225  307526 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 00:43:27.963236  307526 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1217 00:43:27.963300  307526 start.go:353] cluster config:
	{Name:auto-802249 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:auto-802249 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:43:27.964435  307526 out.go:179] * Starting "auto-802249" primary control-plane node in "auto-802249" cluster
	I1217 00:43:27.965456  307526 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 00:43:27.966915  307526 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 00:43:27.967856  307526 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1217 00:43:27.967894  307526 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22168-12816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1217 00:43:27.967895  307526 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 00:43:27.967917  307526 cache.go:65] Caching tarball of preloaded images
	I1217 00:43:27.968039  307526 preload.go:238] Found /home/jenkins/minikube-integration/22168-12816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1217 00:43:27.968054  307526 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1217 00:43:27.968179  307526 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/auto-802249/config.json ...
	I1217 00:43:27.968209  307526 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/auto-802249/config.json: {Name:mk6a800c556cbb3f82d1d4ac2ca5b5edbc64dd1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:43:27.992177  307526 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 00:43:27.992201  307526 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 00:43:27.992224  307526 cache.go:243] Successfully downloaded all kic artifacts
	I1217 00:43:27.992259  307526 start.go:360] acquireMachinesLock for auto-802249: {Name:mkbccf009dcb23cd4ffd2a50ee9c72043c15e319 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 00:43:27.992359  307526 start.go:364] duration metric: took 79.06µs to acquireMachinesLock for "auto-802249"
	I1217 00:43:27.992387  307526 start.go:93] Provisioning new machine with config: &{Name:auto-802249 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:auto-802249 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 00:43:27.992483  307526 start.go:125] createHost starting for "" (driver="docker")
	I1217 00:43:23.316167  301437 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1217 00:43:23.321794  301437 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1217 00:43:23.323158  301437 api_server.go:141] control plane version: v1.34.2
	I1217 00:43:23.323187  301437 api_server.go:131] duration metric: took 1.0079864s to wait for apiserver health ...
	I1217 00:43:23.323198  301437 system_pods.go:43] waiting for kube-system pods to appear ...
	I1217 00:43:23.328161  301437 system_pods.go:59] 8 kube-system pods found
	I1217 00:43:23.328199  301437 system_pods.go:61] "coredns-66bc5c9577-vtspd" [aedf434b-e03e-479c-a8f2-199e28231d61] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 00:43:23.328211  301437 system_pods.go:61] "etcd-embed-certs-153232" [68a7a631-c79e-48d1-bd8d-1aafc2b61fcc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1217 00:43:23.328221  301437 system_pods.go:61] "kindnet-zffzt" [f06f5d73-eef9-4876-b0aa-862d58c18777] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1217 00:43:23.328232  301437 system_pods.go:61] "kube-apiserver-embed-certs-153232" [a0a484be-31c5-4471-b35c-7d059d9e1b00] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1217 00:43:23.328246  301437 system_pods.go:61] "kube-controller-manager-embed-certs-153232" [6fd01afb-bd8e-450b-9082-310ff94c5958] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1217 00:43:23.328263  301437 system_pods.go:61] "kube-proxy-82b8k" [68026912-6bcc-4aee-b806-51f967dc200f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1217 00:43:23.328275  301437 system_pods.go:61] "kube-scheduler-embed-certs-153232" [af854f70-8bef-44c5-ad64-197a3282d5c3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1217 00:43:23.328288  301437 system_pods.go:61] "storage-provisioner" [ad4a1982-2da6-490d-bcba-f04782d2d9b8] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 00:43:23.328296  301437 system_pods.go:74] duration metric: took 5.091218ms to wait for pod list to return data ...
	I1217 00:43:23.328306  301437 default_sa.go:34] waiting for default service account to be created ...
	I1217 00:43:23.331475  301437 default_sa.go:45] found service account: "default"
	I1217 00:43:23.331498  301437 default_sa.go:55] duration metric: took 3.185353ms for default service account to be created ...
	I1217 00:43:23.331510  301437 system_pods.go:116] waiting for k8s-apps to be running ...
	I1217 00:43:23.335177  301437 system_pods.go:86] 8 kube-system pods found
	I1217 00:43:23.335208  301437 system_pods.go:89] "coredns-66bc5c9577-vtspd" [aedf434b-e03e-479c-a8f2-199e28231d61] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 00:43:23.335227  301437 system_pods.go:89] "etcd-embed-certs-153232" [68a7a631-c79e-48d1-bd8d-1aafc2b61fcc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1217 00:43:23.335238  301437 system_pods.go:89] "kindnet-zffzt" [f06f5d73-eef9-4876-b0aa-862d58c18777] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1217 00:43:23.335247  301437 system_pods.go:89] "kube-apiserver-embed-certs-153232" [a0a484be-31c5-4471-b35c-7d059d9e1b00] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1217 00:43:23.335255  301437 system_pods.go:89] "kube-controller-manager-embed-certs-153232" [6fd01afb-bd8e-450b-9082-310ff94c5958] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1217 00:43:23.335264  301437 system_pods.go:89] "kube-proxy-82b8k" [68026912-6bcc-4aee-b806-51f967dc200f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1217 00:43:23.335273  301437 system_pods.go:89] "kube-scheduler-embed-certs-153232" [af854f70-8bef-44c5-ad64-197a3282d5c3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1217 00:43:23.335281  301437 system_pods.go:89] "storage-provisioner" [ad4a1982-2da6-490d-bcba-f04782d2d9b8] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 00:43:23.335290  301437 system_pods.go:126] duration metric: took 3.772865ms to wait for k8s-apps to be running ...
	I1217 00:43:23.335300  301437 system_svc.go:44] waiting for kubelet service to be running ....
	I1217 00:43:23.335346  301437 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 00:43:23.351017  301437 system_svc.go:56] duration metric: took 15.681058ms WaitForService to wait for kubelet
	I1217 00:43:23.351048  301437 kubeadm.go:587] duration metric: took 3.059548515s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 00:43:23.351069  301437 node_conditions.go:102] verifying NodePressure condition ...
	I1217 00:43:23.353894  301437 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1217 00:43:23.353920  301437 node_conditions.go:123] node cpu capacity is 8
	I1217 00:43:23.353939  301437 node_conditions.go:105] duration metric: took 2.863427ms to run NodePressure ...
	I1217 00:43:23.353952  301437 start.go:242] waiting for startup goroutines ...
	I1217 00:43:23.353966  301437 start.go:247] waiting for cluster config update ...
	I1217 00:43:23.353983  301437 start.go:256] writing updated cluster config ...
	I1217 00:43:23.354303  301437 ssh_runner.go:195] Run: rm -f paused
	I1217 00:43:23.358200  301437 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 00:43:23.362406  301437 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-vtspd" in "kube-system" namespace to be "Ready" or be gone ...
	W1217 00:43:25.369130  301437 pod_ready.go:104] pod "coredns-66bc5c9577-vtspd" is not "Ready", error: <nil>
	W1217 00:43:27.871364  301437 pod_ready.go:104] pod "coredns-66bc5c9577-vtspd" is not "Ready", error: <nil>
	I1217 00:43:24.861152  306295 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-414413" ...
	I1217 00:43:24.861228  306295 cli_runner.go:164] Run: docker start default-k8s-diff-port-414413
	I1217 00:43:25.138682  306295 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-414413 --format={{.State.Status}}
	I1217 00:43:25.161023  306295 kic.go:430] container "default-k8s-diff-port-414413" state is running.
	I1217 00:43:25.161708  306295 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-414413
	I1217 00:43:25.195207  306295 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/default-k8s-diff-port-414413/config.json ...
	I1217 00:43:25.195443  306295 machine.go:94] provisionDockerMachine start ...
	I1217 00:43:25.195531  306295 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-414413
	I1217 00:43:25.216531  306295 main.go:143] libmachine: Using SSH client type: native
	I1217 00:43:25.216872  306295 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1217 00:43:25.216891  306295 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 00:43:25.217780  306295 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:54952->127.0.0.1:33103: read: connection reset by peer
	I1217 00:43:28.344965  306295 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-414413
	
	I1217 00:43:28.345032  306295 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-414413"
	I1217 00:43:28.345097  306295 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-414413
	I1217 00:43:28.363417  306295 main.go:143] libmachine: Using SSH client type: native
	I1217 00:43:28.363640  306295 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1217 00:43:28.363653  306295 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-414413 && echo "default-k8s-diff-port-414413" | sudo tee /etc/hostname
	I1217 00:43:28.501277  306295 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-414413
	
	I1217 00:43:28.501360  306295 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-414413
	I1217 00:43:28.520331  306295 main.go:143] libmachine: Using SSH client type: native
	I1217 00:43:28.520630  306295 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1217 00:43:28.520652  306295 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-414413' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-414413/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-414413' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 00:43:28.648245  306295 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 00:43:28.648275  306295 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22168-12816/.minikube CaCertPath:/home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22168-12816/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22168-12816/.minikube}
	I1217 00:43:28.648294  306295 ubuntu.go:190] setting up certificates
	I1217 00:43:28.648307  306295 provision.go:84] configureAuth start
	I1217 00:43:28.648361  306295 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-414413
	I1217 00:43:28.674762  306295 provision.go:143] copyHostCerts
	I1217 00:43:28.674833  306295 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-12816/.minikube/ca.pem, removing ...
	I1217 00:43:28.674851  306295 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-12816/.minikube/ca.pem
	I1217 00:43:28.674962  306295 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22168-12816/.minikube/ca.pem (1078 bytes)
	I1217 00:43:28.675161  306295 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-12816/.minikube/cert.pem, removing ...
	I1217 00:43:28.675175  306295 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-12816/.minikube/cert.pem
	I1217 00:43:28.675220  306295 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22168-12816/.minikube/cert.pem (1123 bytes)
	I1217 00:43:28.675301  306295 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-12816/.minikube/key.pem, removing ...
	I1217 00:43:28.675313  306295 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-12816/.minikube/key.pem
	I1217 00:43:28.675348  306295 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22168-12816/.minikube/key.pem (1679 bytes)
	I1217 00:43:28.675415  306295 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22168-12816/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-414413 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-414413 localhost minikube]
	I1217 00:43:28.708743  306295 provision.go:177] copyRemoteCerts
	I1217 00:43:28.708801  306295 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 00:43:28.708843  306295 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-414413
	I1217 00:43:28.729865  306295 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/default-k8s-diff-port-414413/id_rsa Username:docker}
	I1217 00:43:28.845281  306295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1217 00:43:28.877349  306295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1217 00:43:28.902351  306295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 00:43:28.928580  306295 provision.go:87] duration metric: took 280.250857ms to configureAuth
	I1217 00:43:28.928613  306295 ubuntu.go:206] setting minikube options for container-runtime
	I1217 00:43:28.928801  306295 config.go:182] Loaded profile config "default-k8s-diff-port-414413": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:43:28.929124  306295 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-414413
	I1217 00:43:28.955949  306295 main.go:143] libmachine: Using SSH client type: native
	I1217 00:43:28.956300  306295 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1217 00:43:28.956325  306295 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 00:43:27.994357  307526 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1217 00:43:27.994631  307526 start.go:159] libmachine.API.Create for "auto-802249" (driver="docker")
	I1217 00:43:27.994662  307526 client.go:173] LocalClient.Create starting
	I1217 00:43:27.994709  307526 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem
	I1217 00:43:27.994740  307526 main.go:143] libmachine: Decoding PEM data...
	I1217 00:43:27.994760  307526 main.go:143] libmachine: Parsing certificate...
	I1217 00:43:27.994823  307526 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22168-12816/.minikube/certs/cert.pem
	I1217 00:43:27.994843  307526 main.go:143] libmachine: Decoding PEM data...
	I1217 00:43:27.994852  307526 main.go:143] libmachine: Parsing certificate...
	I1217 00:43:27.995194  307526 cli_runner.go:164] Run: docker network inspect auto-802249 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1217 00:43:28.013301  307526 cli_runner.go:211] docker network inspect auto-802249 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1217 00:43:28.013389  307526 network_create.go:284] running [docker network inspect auto-802249] to gather additional debugging logs...
	I1217 00:43:28.013413  307526 cli_runner.go:164] Run: docker network inspect auto-802249
	W1217 00:43:28.030618  307526 cli_runner.go:211] docker network inspect auto-802249 returned with exit code 1
	I1217 00:43:28.030651  307526 network_create.go:287] error running [docker network inspect auto-802249]: docker network inspect auto-802249: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network auto-802249 not found
	I1217 00:43:28.030669  307526 network_create.go:289] output of [docker network inspect auto-802249]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network auto-802249 not found
	
	** /stderr **
	I1217 00:43:28.030812  307526 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 00:43:28.050434  307526 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-6ffd1d738f01 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:6e:3d:52:75:47:82} reservation:<nil>}
	I1217 00:43:28.051191  307526 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-280edd437675 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:5e:ae:02:b5:f9:a6} reservation:<nil>}
	I1217 00:43:28.051887  307526 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-9f28d049043c IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:fa:3f:8e:e9:44:56} reservation:<nil>}
	I1217 00:43:28.052544  307526 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-a57026acfc12 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:aa:e6:32:39:49:3b} reservation:<nil>}
	I1217 00:43:28.053095  307526 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-a0b8f164bc66 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:ae:bf:0f:c2:a1:7a} reservation:<nil>}
	I1217 00:43:28.054051  307526 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f3a140}
	I1217 00:43:28.054075  307526 network_create.go:124] attempt to create docker network auto-802249 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1217 00:43:28.054125  307526 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-802249 auto-802249
	I1217 00:43:28.104462  307526 network_create.go:108] docker network auto-802249 192.168.94.0/24 created
	I1217 00:43:28.104500  307526 kic.go:121] calculated static IP "192.168.94.2" for the "auto-802249" container
	I1217 00:43:28.104582  307526 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1217 00:43:28.123300  307526 cli_runner.go:164] Run: docker volume create auto-802249 --label name.minikube.sigs.k8s.io=auto-802249 --label created_by.minikube.sigs.k8s.io=true
	I1217 00:43:28.142730  307526 oci.go:103] Successfully created a docker volume auto-802249
	I1217 00:43:28.142802  307526 cli_runner.go:164] Run: docker run --rm --name auto-802249-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-802249 --entrypoint /usr/bin/test -v auto-802249:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib
	I1217 00:43:28.898458  307526 oci.go:107] Successfully prepared a docker volume auto-802249
	I1217 00:43:28.898530  307526 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1217 00:43:28.898544  307526 kic.go:194] Starting extracting preloaded images to volume ...
	I1217 00:43:28.898637  307526 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22168-12816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v auto-802249:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir
	W1217 00:43:30.368852  301437 pod_ready.go:104] pod "coredns-66bc5c9577-vtspd" is not "Ready", error: <nil>
	W1217 00:43:32.867740  301437 pod_ready.go:104] pod "coredns-66bc5c9577-vtspd" is not "Ready", error: <nil>
	I1217 00:43:29.681399  306295 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 00:43:29.681431  306295 machine.go:97] duration metric: took 4.48596948s to provisionDockerMachine
	I1217 00:43:29.681447  306295 start.go:293] postStartSetup for "default-k8s-diff-port-414413" (driver="docker")
	I1217 00:43:29.681462  306295 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 00:43:29.681523  306295 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 00:43:29.681578  306295 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-414413
	I1217 00:43:29.705905  306295 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/default-k8s-diff-port-414413/id_rsa Username:docker}
	I1217 00:43:29.812429  306295 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 00:43:29.816922  306295 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 00:43:29.816955  306295 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 00:43:29.816967  306295 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-12816/.minikube/addons for local assets ...
	I1217 00:43:29.817052  306295 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-12816/.minikube/files for local assets ...
	I1217 00:43:29.817160  306295 filesync.go:149] local asset: /home/jenkins/minikube-integration/22168-12816/.minikube/files/etc/ssl/certs/163542.pem -> 163542.pem in /etc/ssl/certs
	I1217 00:43:29.817292  306295 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 00:43:29.827352  306295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/files/etc/ssl/certs/163542.pem --> /etc/ssl/certs/163542.pem (1708 bytes)
	I1217 00:43:29.849430  306295 start.go:296] duration metric: took 167.967034ms for postStartSetup
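	(The filesync.go lines in this postStartSetup step scan $MINIKUBE_HOME/files and mirror anything found there into the node, here files/etc/ssl/certs/163542.pem --> /etc/ssl/certs/163542.pem. A minimal sketch of that local-to-remote path mapping, under the assumption that the actual copy is done separately over SSH:)

	package main

	import (
		"fmt"
		"io/fs"
		"path/filepath"
	)

	// localAssets lists every regular file under <miniHome>/files together with
	// the remote path it should land at (its path relative to the files dir).
	// Sketch only; minikube's filesync.go also scans .minikube/addons.
	func localAssets(miniHome string) (map[string]string, error) {
		root := filepath.Join(miniHome, "files")
		assets := map[string]string{}
		err := filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
			if err != nil || d.IsDir() {
				return err
			}
			rel, relErr := filepath.Rel(root, path)
			if relErr != nil {
				return relErr
			}
			assets[path] = "/" + filepath.ToSlash(rel) // e.g. /etc/ssl/certs/163542.pem
			return nil
		})
		return assets, err
	}

	func main() {
		m, err := localAssets("/home/jenkins/minikube-integration/22168-12816/.minikube")
		if err != nil {
			fmt.Println("scan failed:", err)
			return
		}
		for local, remote := range m {
			fmt.Printf("scp %s --> %s\n", local, remote)
		}
	}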
	I1217 00:43:29.849524  306295 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 00:43:29.849572  306295 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-414413
	I1217 00:43:29.873138  306295 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/default-k8s-diff-port-414413/id_rsa Username:docker}
	I1217 00:43:29.973222  306295 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 00:43:29.979217  306295 fix.go:56] duration metric: took 5.137583072s for fixHost
	I1217 00:43:29.979245  306295 start.go:83] releasing machines lock for "default-k8s-diff-port-414413", held for 5.137641613s
	I1217 00:43:29.979313  306295 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-414413
	I1217 00:43:30.003017  306295 ssh_runner.go:195] Run: cat /version.json
	I1217 00:43:30.003088  306295 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-414413
	I1217 00:43:30.003099  306295 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 00:43:30.003224  306295 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-414413
	I1217 00:43:30.027827  306295 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/default-k8s-diff-port-414413/id_rsa Username:docker}
	I1217 00:43:30.027827  306295 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/default-k8s-diff-port-414413/id_rsa Username:docker}
	I1217 00:43:30.203372  306295 ssh_runner.go:195] Run: systemctl --version
	I1217 00:43:30.212836  306295 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 00:43:30.251227  306295 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 00:43:30.256929  306295 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 00:43:30.257011  306295 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 00:43:30.267178  306295 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1217 00:43:30.267206  306295 start.go:496] detecting cgroup driver to use...
	I1217 00:43:30.267236  306295 detect.go:190] detected "systemd" cgroup driver on host os
	I1217 00:43:30.267279  306295 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 00:43:30.287349  306295 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 00:43:30.303766  306295 docker.go:218] disabling cri-docker service (if available) ...
	I1217 00:43:30.303824  306295 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 00:43:30.323548  306295 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 00:43:30.340545  306295 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 00:43:30.459414  306295 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 00:43:30.576563  306295 docker.go:234] disabling docker service ...
	I1217 00:43:30.576631  306295 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 00:43:30.597770  306295 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 00:43:30.615190  306295 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 00:43:30.735085  306295 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 00:43:30.854647  306295 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 00:43:30.871811  306295 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 00:43:30.892149  306295 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 00:43:30.892212  306295 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:43:30.904058  306295 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1217 00:43:30.904190  306295 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:43:30.917063  306295 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:43:30.928165  306295 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:43:30.940098  306295 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 00:43:30.951468  306295 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:43:30.963044  306295 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:43:30.975338  306295 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:43:30.988168  306295 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 00:43:30.998153  306295 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 00:43:31.008817  306295 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:43:31.123938  306295 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1217 00:43:33.676337  306295 ssh_runner.go:235] Completed: sudo systemctl restart crio: (2.552363306s)
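	(The block of sed commands above rewrites /etc/crio/crio.conf.d/02-crio.conf to pin the pause image and force the systemd cgroup manager before crio is restarted. A hedged Go sketch of the same two substitutions; the file path and key names are taken from the log, the helper itself is illustrative:)

	package main

	import (
		"fmt"
		"os"
		"regexp"
	)

	// rewriteCrioConf applies the same two edits the log performs with sed:
	// pin pause_image and set cgroup_manager in the crio drop-in. Sketch only.
	func rewriteCrioConf(path, pauseImage, cgroupManager string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAll(data, []byte(fmt.Sprintf("pause_image = %q", pauseImage)))
		out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAll(out, []byte(fmt.Sprintf("cgroup_manager = %q", cgroupManager)))
		return os.WriteFile(path, out, 0o644)
	}

	func main() {
		err := rewriteCrioConf("/etc/crio/crio.conf.d/02-crio.conf",
			"registry.k8s.io/pause:3.10.1", "systemd")
		fmt.Println(err)
	}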
	I1217 00:43:33.676369  306295 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 00:43:33.676417  306295 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 00:43:33.680500  306295 start.go:564] Will wait 60s for crictl version
	I1217 00:43:33.680561  306295 ssh_runner.go:195] Run: which crictl
	I1217 00:43:33.684471  306295 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 00:43:33.709533  306295 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
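	(The "Will wait 60s for socket path /var/run/crio/crio.sock" step followed by stat and a crictl version probe is a plain poll-until-ready loop. A self-contained sketch of such a wait; the retry interval is an assumption, not taken from the log:)

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForSocket polls until path exists or the timeout elapses.
	// Illustrative only; minikube wraps this in its own retry helpers and
	// follows it with `crictl version`.
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
	}

	func main() {
		if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
			fmt.Println(err)
			os.Exit(1)
		}
		fmt.Println("socket is up")
	}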
	I1217 00:43:33.709601  306295 ssh_runner.go:195] Run: crio --version
	I1217 00:43:33.738615  306295 ssh_runner.go:195] Run: crio --version
	I1217 00:43:33.775920  306295 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1217 00:43:33.777142  306295 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-414413 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 00:43:33.795353  306295 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1217 00:43:33.800123  306295 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 00:43:33.811084  306295 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-414413 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-414413 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 00:43:33.811227  306295 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1217 00:43:33.811279  306295 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 00:43:33.844811  306295 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 00:43:33.844831  306295 crio.go:433] Images already preloaded, skipping extraction
	I1217 00:43:33.844887  306295 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 00:43:33.870923  306295 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 00:43:33.870948  306295 cache_images.go:86] Images are preloaded, skipping loading
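	(The two `sudo crictl images --output json` runs that conclude "all images are preloaded" amount to decoding crictl's JSON and confirming every required repo tag is present. A sketch of that check; the {"images":[{"repoTags":[...]}]} shape is an assumption based on crictl's documented output, not something shown in this log:)

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	type crictlImages struct {
		Images []struct {
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}

	// allPreloaded returns true when every tag in want shows up in
	// `crictl images --output json`.
	func allPreloaded(want []string) (bool, error) {
		out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
		if err != nil {
			return false, err
		}
		var listed crictlImages
		if err := json.Unmarshal(out, &listed); err != nil {
			return false, err
		}
		have := map[string]bool{}
		for _, img := range listed.Images {
			for _, tag := range img.RepoTags {
				have[tag] = true
			}
		}
		for _, tag := range want {
			if !have[tag] {
				return false, nil
			}
		}
		return true, nil
	}

	func main() {
		ok, err := allPreloaded([]string{"registry.k8s.io/pause:3.10.1"})
		fmt.Println(ok, err)
	}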
	I1217 00:43:33.870957  306295 kubeadm.go:935] updating node { 192.168.76.2 8444 v1.34.2 crio true true} ...
	I1217 00:43:33.871087  306295 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-414413 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-414413 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 00:43:33.871165  306295 ssh_runner.go:195] Run: crio config
	I1217 00:43:33.925371  306295 cni.go:84] Creating CNI manager for ""
	I1217 00:43:33.925393  306295 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 00:43:33.925409  306295 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 00:43:33.925433  306295 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-414413 NodeName:default-k8s-diff-port-414413 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.c
rt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 00:43:33.925587  306295 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-414413"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 00:43:33.925651  306295 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1217 00:43:33.934954  306295 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 00:43:33.935047  306295 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 00:43:33.943677  306295 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1217 00:43:33.958880  306295 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1217 00:43:33.973470  306295 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I1217 00:43:33.987696  306295 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1217 00:43:33.992165  306295 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 00:43:34.001928  306295 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:43:34.092409  306295 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 00:43:34.108980  306295 certs.go:69] Setting up /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/default-k8s-diff-port-414413 for IP: 192.168.76.2
	I1217 00:43:34.109018  306295 certs.go:195] generating shared ca certs ...
	I1217 00:43:34.109037  306295 certs.go:227] acquiring lock for ca certs: {Name:mk3fafd0dd66863a6056cb02497503a5e6afecf4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:43:34.109188  306295 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22168-12816/.minikube/ca.key
	I1217 00:43:34.109255  306295 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22168-12816/.minikube/proxy-client-ca.key
	I1217 00:43:34.109271  306295 certs.go:257] generating profile certs ...
	I1217 00:43:34.109424  306295 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/default-k8s-diff-port-414413/client.key
	I1217 00:43:34.110428  306295 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/default-k8s-diff-port-414413/apiserver.key.0797176d
	I1217 00:43:34.110528  306295 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/default-k8s-diff-port-414413/proxy-client.key
	I1217 00:43:34.110676  306295 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/16354.pem (1338 bytes)
	W1217 00:43:34.110725  306295 certs.go:480] ignoring /home/jenkins/minikube-integration/22168-12816/.minikube/certs/16354_empty.pem, impossibly tiny 0 bytes
	I1217 00:43:34.110735  306295 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca-key.pem (1679 bytes)
	I1217 00:43:34.110772  306295 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem (1078 bytes)
	I1217 00:43:34.110806  306295 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/cert.pem (1123 bytes)
	I1217 00:43:34.111146  306295 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/key.pem (1679 bytes)
	I1217 00:43:34.111247  306295 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/files/etc/ssl/certs/163542.pem (1708 bytes)
	I1217 00:43:34.112136  306295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 00:43:34.139905  306295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1217 00:43:34.163220  306295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 00:43:34.188725  306295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1671 bytes)
	I1217 00:43:34.228416  306295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/default-k8s-diff-port-414413/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1217 00:43:34.256716  306295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/default-k8s-diff-port-414413/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 00:43:34.280930  306295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/default-k8s-diff-port-414413/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 00:43:34.300601  306295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/default-k8s-diff-port-414413/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 00:43:34.321878  306295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 00:43:34.341964  306295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/certs/16354.pem --> /usr/share/ca-certificates/16354.pem (1338 bytes)
	I1217 00:43:34.361554  306295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/files/etc/ssl/certs/163542.pem --> /usr/share/ca-certificates/163542.pem (1708 bytes)
	I1217 00:43:34.379883  306295 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 00:43:34.393397  306295 ssh_runner.go:195] Run: openssl version
	I1217 00:43:34.400563  306295 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/163542.pem
	I1217 00:43:34.408550  306295 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/163542.pem /etc/ssl/certs/163542.pem
	I1217 00:43:34.416706  306295 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/163542.pem
	I1217 00:43:34.420711  306295 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 00:13 /usr/share/ca-certificates/163542.pem
	I1217 00:43:34.420765  306295 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/163542.pem
	I1217 00:43:34.458970  306295 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 00:43:34.467164  306295 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:43:34.475301  306295 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 00:43:34.483588  306295 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:43:34.487260  306295 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 00:05 /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:43:34.487328  306295 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:43:34.523803  306295 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 00:43:34.531254  306295 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/16354.pem
	I1217 00:43:34.538816  306295 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/16354.pem /etc/ssl/certs/16354.pem
	I1217 00:43:34.546250  306295 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16354.pem
	I1217 00:43:34.550565  306295 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 00:13 /usr/share/ca-certificates/16354.pem
	I1217 00:43:34.550620  306295 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16354.pem
	I1217 00:43:34.595335  306295 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 00:43:34.603361  306295 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 00:43:34.607349  306295 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 00:43:34.645208  306295 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 00:43:34.690905  306295 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 00:43:34.737902  306295 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 00:43:34.794952  306295 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 00:43:34.844915  306295 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
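	(Each `openssl x509 -noout -in ... -checkend 86400` above asks whether the certificate will still be valid 24 hours from now. The same check expressed in Go with crypto/x509, as a sketch; the example path is one of the files named in the log:)

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path expires within d,
	// i.e. the case in which `openssl x509 -checkend <seconds>` would fail.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		raw, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(raw)
		if block == nil {
			return false, fmt.Errorf("%s: no PEM block found", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("expires within 24h:", soon)
	}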
	I1217 00:43:34.882722  306295 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-414413 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-414413 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:43:34.882789  306295 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 00:43:34.882842  306295 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 00:43:34.917334  306295 cri.go:89] found id: "2a7b291de067a5044f406eaa0104c52261424e3730e6c2e4d38864b41943eddd"
	I1217 00:43:34.917357  306295 cri.go:89] found id: "4dcc77a289bba808ececc2d4f0efa70e966e843b2057d6de5ad0054d0be435c8"
	I1217 00:43:34.917363  306295 cri.go:89] found id: "ba3df04c6b3feaf2f234a1a9b098c1269d844cdbaf6531304d6ddd40b10820d5"
	I1217 00:43:34.917368  306295 cri.go:89] found id: "eecadcae34c3698337c66c6d6dbab2066993e3216b64d194344407552bc449b5"
	I1217 00:43:34.917373  306295 cri.go:89] found id: ""
	I1217 00:43:34.917413  306295 ssh_runner.go:195] Run: sudo runc list -f json
	W1217 00:43:34.930607  306295 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T00:43:34Z" level=error msg="open /run/runc: no such file or directory"
	I1217 00:43:34.930674  306295 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 00:43:34.938821  306295 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1217 00:43:34.938837  306295 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1217 00:43:34.938875  306295 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1217 00:43:34.946910  306295 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1217 00:43:34.948063  306295 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-414413" does not appear in /home/jenkins/minikube-integration/22168-12816/kubeconfig
	I1217 00:43:34.948899  306295 kubeconfig.go:62] /home/jenkins/minikube-integration/22168-12816/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-414413" cluster setting kubeconfig missing "default-k8s-diff-port-414413" context setting]
	I1217 00:43:34.950019  306295 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/kubeconfig: {Name:mkd977475b39383babd1c1bcd25b2b3c1ea2d727 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:43:34.952215  306295 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1217 00:43:34.961195  306295 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1217 00:43:34.961224  306295 kubeadm.go:602] duration metric: took 22.380608ms to restartPrimaryControlPlane
	I1217 00:43:34.961233  306295 kubeadm.go:403] duration metric: took 78.517227ms to StartCluster
	I1217 00:43:34.961249  306295 settings.go:142] acquiring lock: {Name:mk7d7632cd00ceda791845d793d841181ea8188a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:43:34.961307  306295 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22168-12816/kubeconfig
	I1217 00:43:34.963205  306295 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/kubeconfig: {Name:mkd977475b39383babd1c1bcd25b2b3c1ea2d727 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:43:34.963466  306295 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 00:43:34.963687  306295 config.go:182] Loaded profile config "default-k8s-diff-port-414413": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:43:34.963736  306295 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 00:43:34.963816  306295 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-414413"
	I1217 00:43:34.963837  306295 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-414413"
	W1217 00:43:34.963845  306295 addons.go:248] addon storage-provisioner should already be in state true
	I1217 00:43:34.963874  306295 host.go:66] Checking if "default-k8s-diff-port-414413" exists ...
	I1217 00:43:34.964351  306295 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-414413 --format={{.State.Status}}
	I1217 00:43:34.964420  306295 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-414413"
	I1217 00:43:34.964441  306295 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-414413"
	W1217 00:43:34.964449  306295 addons.go:248] addon dashboard should already be in state true
	I1217 00:43:34.964470  306295 host.go:66] Checking if "default-k8s-diff-port-414413" exists ...
	I1217 00:43:34.964505  306295 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-414413"
	I1217 00:43:34.964525  306295 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-414413"
	I1217 00:43:34.964803  306295 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-414413 --format={{.State.Status}}
	I1217 00:43:34.964969  306295 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-414413 --format={{.State.Status}}
	I1217 00:43:34.967219  306295 out.go:179] * Verifying Kubernetes components...
	I1217 00:43:34.968657  306295 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:43:34.993484  306295 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-414413"
	W1217 00:43:34.993507  306295 addons.go:248] addon default-storageclass should already be in state true
	I1217 00:43:34.993533  306295 host.go:66] Checking if "default-k8s-diff-port-414413" exists ...
	I1217 00:43:34.993963  306295 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-414413 --format={{.State.Status}}
	I1217 00:43:34.995235  306295 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 00:43:34.995246  306295 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1217 00:43:34.996518  306295 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:43:34.997338  306295 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 00:43:34.996560  306295 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1217 00:43:33.557934  307526 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22168-12816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v auto-802249:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir: (4.659238848s)
	I1217 00:43:33.557969  307526 kic.go:203] duration metric: took 4.659422421s to extract preloaded images to volume ...
	W1217 00:43:33.558064  307526 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1217 00:43:33.558109  307526 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1217 00:43:33.558147  307526 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1217 00:43:33.614832  307526 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-802249 --name auto-802249 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-802249 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-802249 --network auto-802249 --ip 192.168.94.2 --volume auto-802249:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78
	I1217 00:43:33.899162  307526 cli_runner.go:164] Run: docker container inspect auto-802249 --format={{.State.Running}}
	I1217 00:43:33.918604  307526 cli_runner.go:164] Run: docker container inspect auto-802249 --format={{.State.Status}}
	I1217 00:43:33.939167  307526 cli_runner.go:164] Run: docker exec auto-802249 stat /var/lib/dpkg/alternatives/iptables
	I1217 00:43:33.991527  307526 oci.go:144] the created container "auto-802249" has a running status.
	I1217 00:43:33.991551  307526 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22168-12816/.minikube/machines/auto-802249/id_rsa...
	I1217 00:43:34.093291  307526 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22168-12816/.minikube/machines/auto-802249/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1217 00:43:34.123193  307526 cli_runner.go:164] Run: docker container inspect auto-802249 --format={{.State.Status}}
	I1217 00:43:34.149038  307526 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1217 00:43:34.149065  307526 kic_runner.go:114] Args: [docker exec --privileged auto-802249 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1217 00:43:34.202194  307526 cli_runner.go:164] Run: docker container inspect auto-802249 --format={{.State.Status}}
	I1217 00:43:34.231461  307526 machine.go:94] provisionDockerMachine start ...
	I1217 00:43:34.231554  307526 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-802249
	I1217 00:43:34.257565  307526 main.go:143] libmachine: Using SSH client type: native
	I1217 00:43:34.257906  307526 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1217 00:43:34.257925  307526 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 00:43:34.396715  307526 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-802249
	
	I1217 00:43:34.396747  307526 ubuntu.go:182] provisioning hostname "auto-802249"
	I1217 00:43:34.396806  307526 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-802249
	I1217 00:43:34.416957  307526 main.go:143] libmachine: Using SSH client type: native
	I1217 00:43:34.417264  307526 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1217 00:43:34.417285  307526 main.go:143] libmachine: About to run SSH command:
	sudo hostname auto-802249 && echo "auto-802249" | sudo tee /etc/hostname
	I1217 00:43:34.556089  307526 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-802249
	
	I1217 00:43:34.556171  307526 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-802249
	I1217 00:43:34.576140  307526 main.go:143] libmachine: Using SSH client type: native
	I1217 00:43:34.576361  307526 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1217 00:43:34.576379  307526 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-802249' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-802249/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-802249' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 00:43:34.703765  307526 main.go:143] libmachine: SSH cmd err, output: <nil>: 
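	(Hostname provisioning above runs shell commands inside the node by dialing the container's forwarded SSH port, 127.0.0.1:33108 here, as the docker user. A minimal client sketch with golang.org/x/crypto/ssh; the address, user and key path are copied from the log, everything else is illustrative rather than minikube's sshutil/libmachine code:)

	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	// runRemote dials addr with key-based auth and returns the combined output
	// of cmd. Sketch only; the real layer adds retries, timeouts and host key
	// handling.
	func runRemote(addr, user, keyPath, cmd string) (string, error) {
		key, err := os.ReadFile(keyPath)
		if err != nil {
			return "", err
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			return "", err
		}
		cfg := &ssh.ClientConfig{
			User:            user,
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local test container
		}
		client, err := ssh.Dial("tcp", addr, cfg)
		if err != nil {
			return "", err
		}
		defer client.Close()
		session, err := client.NewSession()
		if err != nil {
			return "", err
		}
		defer session.Close()
		out, err := session.CombinedOutput(cmd)
		return string(out), err
	}

	func main() {
		out, err := runRemote("127.0.0.1:33108", "docker",
			"/home/jenkins/minikube-integration/22168-12816/.minikube/machines/auto-802249/id_rsa",
			"hostname")
		fmt.Println(out, err)
	}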
	I1217 00:43:34.703789  307526 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22168-12816/.minikube CaCertPath:/home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22168-12816/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22168-12816/.minikube}
	I1217 00:43:34.703820  307526 ubuntu.go:190] setting up certificates
	I1217 00:43:34.703831  307526 provision.go:84] configureAuth start
	I1217 00:43:34.703873  307526 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-802249
	I1217 00:43:34.725439  307526 provision.go:143] copyHostCerts
	I1217 00:43:34.725502  307526 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-12816/.minikube/ca.pem, removing ...
	I1217 00:43:34.725516  307526 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-12816/.minikube/ca.pem
	I1217 00:43:34.725581  307526 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22168-12816/.minikube/ca.pem (1078 bytes)
	I1217 00:43:34.725704  307526 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-12816/.minikube/cert.pem, removing ...
	I1217 00:43:34.725717  307526 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-12816/.minikube/cert.pem
	I1217 00:43:34.725761  307526 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22168-12816/.minikube/cert.pem (1123 bytes)
	I1217 00:43:34.725861  307526 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-12816/.minikube/key.pem, removing ...
	I1217 00:43:34.725877  307526 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-12816/.minikube/key.pem
	I1217 00:43:34.725916  307526 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22168-12816/.minikube/key.pem (1679 bytes)
	I1217 00:43:34.726020  307526 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22168-12816/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca-key.pem org=jenkins.auto-802249 san=[127.0.0.1 192.168.94.2 auto-802249 localhost minikube]
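	(The "generating server cert ... san=[127.0.0.1 192.168.94.2 auto-802249 localhost minikube]" line means the machine server certificate carries both DNS and IP subject alternative names so one cert covers every way the node is addressed. A sketch of such a certificate template in Go; it is self-signed here for brevity, whereas the log signs it with the ca.pem/ca-key.pem pair named on that line:)

	package main

	import (
		"crypto/ecdsa"
		"crypto/elliptic"
		"crypto/rand"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	// Sketch of the SAN layout used for the machine server cert.
	func main() {
		key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.auto-802249"}},
			DNSNames:     []string{"auto-802249", "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.94.2")},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}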
	I1217 00:43:34.751321  307526 provision.go:177] copyRemoteCerts
	I1217 00:43:34.751379  307526 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 00:43:34.751409  307526 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-802249
	I1217 00:43:34.775156  307526 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/auto-802249/id_rsa Username:docker}
	I1217 00:43:34.880372  307526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1217 00:43:34.902413  307526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1217 00:43:34.922726  307526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1217 00:43:34.940875  307526 provision.go:87] duration metric: took 237.020186ms to configureAuth
	I1217 00:43:34.940900  307526 ubuntu.go:206] setting minikube options for container-runtime
	I1217 00:43:34.941119  307526 config.go:182] Loaded profile config "auto-802249": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:43:34.941239  307526 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-802249
	I1217 00:43:34.962212  307526 main.go:143] libmachine: Using SSH client type: native
	I1217 00:43:34.962501  307526 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1217 00:43:34.962526  307526 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 00:43:35.300957  307526 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 00:43:35.300986  307526 machine.go:97] duration metric: took 1.069500633s to provisionDockerMachine
	I1217 00:43:35.301067  307526 client.go:176] duration metric: took 7.306395274s to LocalClient.Create
	I1217 00:43:35.301094  307526 start.go:167] duration metric: took 7.306462514s to libmachine.API.Create "auto-802249"
	I1217 00:43:35.301109  307526 start.go:293] postStartSetup for "auto-802249" (driver="docker")
	I1217 00:43:35.301122  307526 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 00:43:35.301201  307526 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 00:43:35.301250  307526 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-802249
	I1217 00:43:35.323372  307526 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/auto-802249/id_rsa Username:docker}
	I1217 00:43:35.421567  307526 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 00:43:35.425242  307526 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 00:43:35.425271  307526 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 00:43:35.425282  307526 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-12816/.minikube/addons for local assets ...
	I1217 00:43:35.425335  307526 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-12816/.minikube/files for local assets ...
	I1217 00:43:35.425425  307526 filesync.go:149] local asset: /home/jenkins/minikube-integration/22168-12816/.minikube/files/etc/ssl/certs/163542.pem -> 163542.pem in /etc/ssl/certs
	I1217 00:43:35.425534  307526 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 00:43:35.433362  307526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/files/etc/ssl/certs/163542.pem --> /etc/ssl/certs/163542.pem (1708 bytes)
	I1217 00:43:35.453775  307526 start.go:296] duration metric: took 152.651929ms for postStartSetup
	I1217 00:43:35.454172  307526 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-802249
	I1217 00:43:35.472981  307526 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/auto-802249/config.json ...
	I1217 00:43:35.473360  307526 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 00:43:35.473412  307526 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-802249
	I1217 00:43:35.491304  307526 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/auto-802249/id_rsa Username:docker}
	I1217 00:43:35.581179  307526 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 00:43:35.585822  307526 start.go:128] duration metric: took 7.593323059s to createHost
	I1217 00:43:35.585847  307526 start.go:83] releasing machines lock for "auto-802249", held for 7.593474141s
	I1217 00:43:35.585914  307526 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-802249
	I1217 00:43:35.610124  307526 ssh_runner.go:195] Run: cat /version.json
	I1217 00:43:35.610181  307526 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-802249
	I1217 00:43:35.610180  307526 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 00:43:35.610256  307526 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-802249
	I1217 00:43:35.630650  307526 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/auto-802249/id_rsa Username:docker}
	I1217 00:43:35.632182  307526 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/auto-802249/id_rsa Username:docker}
	I1217 00:43:35.799876  307526 ssh_runner.go:195] Run: systemctl --version
	I1217 00:43:35.806697  307526 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 00:43:35.847614  307526 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 00:43:35.853164  307526 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 00:43:35.853241  307526 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 00:43:35.884086  307526 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1217 00:43:35.884111  307526 start.go:496] detecting cgroup driver to use...
	I1217 00:43:35.884140  307526 detect.go:190] detected "systemd" cgroup driver on host os
	I1217 00:43:35.884187  307526 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 00:43:35.904246  307526 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 00:43:35.918195  307526 docker.go:218] disabling cri-docker service (if available) ...
	I1217 00:43:35.918257  307526 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 00:43:35.940080  307526 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 00:43:35.960661  307526 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 00:43:36.066109  307526 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 00:43:36.172766  307526 docker.go:234] disabling docker service ...
	I1217 00:43:36.172840  307526 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 00:43:36.191820  307526 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 00:43:36.205981  307526 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 00:43:36.304667  307526 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 00:43:36.396978  307526 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 00:43:36.409464  307526 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 00:43:36.423426  307526 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 00:43:36.423496  307526 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:43:36.433722  307526 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1217 00:43:36.433783  307526 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:43:36.447105  307526 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:43:36.456824  307526 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:43:36.465936  307526 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 00:43:36.474179  307526 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:43:36.482687  307526 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:43:36.498792  307526 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:43:36.509491  307526 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 00:43:36.520216  307526 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 00:43:36.531304  307526 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:43:36.639178  307526 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1217 00:43:36.830624  307526 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 00:43:36.830701  307526 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 00:43:36.835398  307526 start.go:564] Will wait 60s for crictl version
	I1217 00:43:36.835461  307526 ssh_runner.go:195] Run: which crictl
	I1217 00:43:36.839344  307526 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 00:43:36.867636  307526 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1217 00:43:36.867718  307526 ssh_runner.go:195] Run: crio --version
	I1217 00:43:36.898540  307526 ssh_runner.go:195] Run: crio --version
	I1217 00:43:36.935544  307526 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
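A minimal sketch of the two files the commands above leave on the node; every key/value is taken from the tee and sed invocations logged just above, while the TOML section headers are an assumption about where CRI-O keeps those keys in its drop-in:

	# /etc/crictl.yaml (written via tee)
	runtime-endpoint: unix:///var/run/crio/crio.sock

	# /etc/crio/crio.conf.d/02-crio.conf (after the sed edits)
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"

	[crio.runtime]
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]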
	I1217 00:43:34.997453  306295 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-414413
	I1217 00:43:34.998447  306295 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1217 00:43:34.998463  306295 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1217 00:43:34.998517  306295 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-414413
	I1217 00:43:35.029235  306295 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 00:43:35.029322  306295 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 00:43:35.029420  306295 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-414413
	I1217 00:43:35.034809  306295 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/default-k8s-diff-port-414413/id_rsa Username:docker}
	I1217 00:43:35.039130  306295 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/default-k8s-diff-port-414413/id_rsa Username:docker}
	I1217 00:43:35.058900  306295 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/default-k8s-diff-port-414413/id_rsa Username:docker}
	I1217 00:43:35.131678  306295 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 00:43:35.147894  306295 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-414413" to be "Ready" ...
	I1217 00:43:35.152120  306295 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1217 00:43:35.152140  306295 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1217 00:43:35.156835  306295 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:43:35.167716  306295 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1217 00:43:35.167737  306295 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1217 00:43:35.175633  306295 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:43:35.186148  306295 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1217 00:43:35.186174  306295 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1217 00:43:35.207150  306295 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1217 00:43:35.207173  306295 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1217 00:43:35.224666  306295 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1217 00:43:35.224693  306295 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1217 00:43:35.244612  306295 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1217 00:43:35.244644  306295 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1217 00:43:35.260605  306295 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1217 00:43:35.260625  306295 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1217 00:43:35.276411  306295 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1217 00:43:35.276439  306295 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1217 00:43:35.289594  306295 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1217 00:43:35.289610  306295 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1217 00:43:35.304047  306295 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
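If the bulk apply above succeeds, all dashboard objects land in the kubernetes-dashboard namespace; a hedged follow-up check (not part of the test run, reusing the same on-node kubeconfig and kubectl binary as the commands above) would be:

	sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl -n kubernetes-dashboard get deploy,svc,sa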
	I1217 00:43:36.618697  306295 node_ready.go:49] node "default-k8s-diff-port-414413" is "Ready"
	I1217 00:43:36.618730  306295 node_ready.go:38] duration metric: took 1.470803516s for node "default-k8s-diff-port-414413" to be "Ready" ...
	I1217 00:43:36.618747  306295 api_server.go:52] waiting for apiserver process to appear ...
	I1217 00:43:36.618799  306295 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:43:37.195015  306295 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.038102082s)
	I1217 00:43:37.195017  306295 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.019330508s)
	I1217 00:43:37.195167  306295 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.891083549s)
	I1217 00:43:37.195199  306295 api_server.go:72] duration metric: took 2.231702348s to wait for apiserver process to appear ...
	I1217 00:43:37.195214  306295 api_server.go:88] waiting for apiserver healthz status ...
	I1217 00:43:37.195235  306295 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1217 00:43:37.196936  306295 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-414413 addons enable metrics-server
	
	I1217 00:43:37.200106  306295 api_server.go:279] https://192.168.76.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1217 00:43:37.200129  306295 api_server.go:103] status: https://192.168.76.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
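The 500 responses above are the apiserver's verbose healthz report: each [+]/[-] line is one registered check, and the endpoint keeps returning 500 until the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks flip to ok. The same view, or an individual check, can be pulled by hand; a sketch, with the address and port taken from the log and -k because the test hits the endpoint anonymously:

	curl -ksS 'https://192.168.76.2:8444/healthz?verbose'
	curl -ksS https://192.168.76.2:8444/healthz/poststarthook/rbac/bootstrap-roles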
	I1217 00:43:37.203574  306295 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1217 00:43:36.937801  307526 cli_runner.go:164] Run: docker network inspect auto-802249 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 00:43:36.957458  307526 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1217 00:43:36.962277  307526 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 00:43:36.973333  307526 kubeadm.go:884] updating cluster {Name:auto-802249 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:auto-802249 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...

	I1217 00:43:36.973441  307526 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1217 00:43:36.973502  307526 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 00:43:37.009823  307526 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 00:43:37.009856  307526 crio.go:433] Images already preloaded, skipping extraction
	I1217 00:43:37.009925  307526 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 00:43:37.040414  307526 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 00:43:37.040440  307526 cache_images.go:86] Images are preloaded, skipping loading
	I1217 00:43:37.040450  307526 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.34.2 crio true true} ...
	I1217 00:43:37.040559  307526 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=auto-802249 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:auto-802249 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
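The [Unit]/[Service] fragment above becomes the kubelet drop-in that is copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines further down. A hedged way to confirm systemd merged it on the node:

	sudo systemctl cat kubelet
	sudo systemctl show kubelet -p ExecStart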
	I1217 00:43:37.040646  307526 ssh_runner.go:195] Run: crio config
	I1217 00:43:37.101505  307526 cni.go:84] Creating CNI manager for ""
	I1217 00:43:37.101530  307526 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 00:43:37.101546  307526 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 00:43:37.101567  307526 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-802249 NodeName:auto-802249 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 00:43:37.101690  307526 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-802249"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
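Assuming kubeadm sits next to the other binaries under /var/lib/minikube/binaries/v1.34.2 (the log only confirms kubelet and kubectl there) and that this kubeadm release ships the validate subcommand, the generated manifest could be sanity-checked before it is consumed:

	sudo /var/lib/minikube/binaries/v1.34.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new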
	
	I1217 00:43:37.101757  307526 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1217 00:43:37.110328  307526 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 00:43:37.110392  307526 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 00:43:37.119275  307526 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
	I1217 00:43:37.132926  307526 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1217 00:43:37.147118  307526 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2207 bytes)
	I1217 00:43:37.160480  307526 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1217 00:43:37.164140  307526 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 00:43:37.175007  307526 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:43:37.280360  307526 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 00:43:37.305519  307526 certs.go:69] Setting up /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/auto-802249 for IP: 192.168.94.2
	I1217 00:43:37.305542  307526 certs.go:195] generating shared ca certs ...
	I1217 00:43:37.305562  307526 certs.go:227] acquiring lock for ca certs: {Name:mk3fafd0dd66863a6056cb02497503a5e6afecf4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:43:37.305725  307526 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22168-12816/.minikube/ca.key
	I1217 00:43:37.305789  307526 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22168-12816/.minikube/proxy-client-ca.key
	I1217 00:43:37.305802  307526 certs.go:257] generating profile certs ...
	I1217 00:43:37.305867  307526 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/auto-802249/client.key
	I1217 00:43:37.305891  307526 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/auto-802249/client.crt with IP's: []
	I1217 00:43:37.344609  307526 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/auto-802249/client.crt ...
	I1217 00:43:37.344636  307526 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/auto-802249/client.crt: {Name:mk5d53455946f112a1748aa6d9e7b0453a9bcfeb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:43:37.344792  307526 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/auto-802249/client.key ...
	I1217 00:43:37.344806  307526 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/auto-802249/client.key: {Name:mk4136a4b5cdab991b4548f1fda38b61fac41c4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:43:37.344881  307526 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/auto-802249/apiserver.key.9f1d7504
	I1217 00:43:37.344898  307526 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/auto-802249/apiserver.crt.9f1d7504 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1217 00:43:37.381137  307526 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/auto-802249/apiserver.crt.9f1d7504 ...
	I1217 00:43:37.381162  307526 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/auto-802249/apiserver.crt.9f1d7504: {Name:mk538b4009d544e7e5844aadc3ac0377c048b69f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:43:37.381316  307526 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/auto-802249/apiserver.key.9f1d7504 ...
	I1217 00:43:37.381329  307526 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/auto-802249/apiserver.key.9f1d7504: {Name:mk59291695510cb10614430a790825e45e435105 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:43:37.381397  307526 certs.go:382] copying /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/auto-802249/apiserver.crt.9f1d7504 -> /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/auto-802249/apiserver.crt
	I1217 00:43:37.381479  307526 certs.go:386] copying /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/auto-802249/apiserver.key.9f1d7504 -> /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/auto-802249/apiserver.key
	I1217 00:43:37.381550  307526 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/auto-802249/proxy-client.key
	I1217 00:43:37.381565  307526 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/auto-802249/proxy-client.crt with IP's: []
	I1217 00:43:37.467073  307526 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/auto-802249/proxy-client.crt ...
	I1217 00:43:37.467097  307526 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/auto-802249/proxy-client.crt: {Name:mkde7eadaed81a1981a7c6ffa4efc6b06449235e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:43:37.467256  307526 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/auto-802249/proxy-client.key ...
	I1217 00:43:37.467268  307526 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/auto-802249/proxy-client.key: {Name:mk6bb1a90a6259966890f42cf520f07ee481acb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:43:37.467429  307526 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/16354.pem (1338 bytes)
	W1217 00:43:37.467469  307526 certs.go:480] ignoring /home/jenkins/minikube-integration/22168-12816/.minikube/certs/16354_empty.pem, impossibly tiny 0 bytes
	I1217 00:43:37.467479  307526 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca-key.pem (1679 bytes)
	I1217 00:43:37.467512  307526 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem (1078 bytes)
	I1217 00:43:37.467538  307526 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/cert.pem (1123 bytes)
	I1217 00:43:37.467561  307526 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/key.pem (1679 bytes)
	I1217 00:43:37.467600  307526 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/files/etc/ssl/certs/163542.pem (1708 bytes)
	I1217 00:43:37.468184  307526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 00:43:37.486439  307526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1217 00:43:37.504781  307526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 00:43:37.525836  307526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1671 bytes)
	I1217 00:43:37.548869  307526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/auto-802249/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1217 00:43:37.574169  307526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/auto-802249/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1217 00:43:37.595474  307526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/auto-802249/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 00:43:37.621108  307526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/auto-802249/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 00:43:37.642499  307526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/files/etc/ssl/certs/163542.pem --> /usr/share/ca-certificates/163542.pem (1708 bytes)
	I1217 00:43:37.668595  307526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 00:43:37.689079  307526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/certs/16354.pem --> /usr/share/ca-certificates/16354.pem (1338 bytes)
	I1217 00:43:37.709377  307526 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 00:43:37.724815  307526 ssh_runner.go:195] Run: openssl version
	I1217 00:43:37.732086  307526 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/163542.pem
	I1217 00:43:37.741910  307526 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/163542.pem /etc/ssl/certs/163542.pem
	I1217 00:43:37.751162  307526 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/163542.pem
	I1217 00:43:37.755733  307526 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 00:13 /usr/share/ca-certificates/163542.pem
	I1217 00:43:37.755789  307526 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/163542.pem
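The apiserver certificate generated above is signed for 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.94.2; a quick sketch for double-checking the SANs and the CA symlink installed by the ln -fs above (paths copied from the log):

	sudo openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt | grep -A1 'Subject Alternative Name'
	ls -l /etc/ssl/certs/163542.pem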
	W1217 00:43:34.868227  301437 pod_ready.go:104] pod "coredns-66bc5c9577-vtspd" is not "Ready", error: <nil>
	W1217 00:43:36.869681  301437 pod_ready.go:104] pod "coredns-66bc5c9577-vtspd" is not "Ready", error: <nil>
	I1217 00:43:37.206683  306295 addons.go:530] duration metric: took 2.242945901s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1217 00:43:37.696054  306295 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1217 00:43:37.701802  306295 api_server.go:279] https://192.168.76.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1217 00:43:37.701854  306295 api_server.go:103] status: https://192.168.76.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1217 00:43:38.196211  306295 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1217 00:43:38.201128  306295 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
	I1217 00:43:38.202440  306295 api_server.go:141] control plane version: v1.34.2
	I1217 00:43:38.202469  306295 api_server.go:131] duration metric: took 1.00724732s to wait for apiserver health ...
	I1217 00:43:38.202480  306295 system_pods.go:43] waiting for kube-system pods to appear ...
	I1217 00:43:38.207407  306295 system_pods.go:59] 8 kube-system pods found
	I1217 00:43:38.207442  306295 system_pods.go:61] "coredns-66bc5c9577-v76f4" [1370bcd6-f828-4ed0-af58-d2d87c7044bd] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 00:43:38.207453  306295 system_pods.go:61] "etcd-default-k8s-diff-port-414413" [286460a9-8a6c-4939-a2a0-0d5b31620d9a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1217 00:43:38.207463  306295 system_pods.go:61] "kindnet-hxhbf" [a4c2ed1b-ad48-484e-b779-4b93f3a72d0b] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1217 00:43:38.207471  306295 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-414413" [aa792fc5-63c2-4287-802e-c99c70a9ab2d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1217 00:43:38.207487  306295 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-414413" [e9a02305-5b73-4867-8605-48c8202cf5dd] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1217 00:43:38.207499  306295 system_pods.go:61] "kube-proxy-prlkw" [9a4571d0-7682-4838-aeb3-ccb4480157b8] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1217 00:43:38.207513  306295 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-414413" [a71da427-5b35-43f4-827b-62a96fdfda42] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1217 00:43:38.207524  306295 system_pods.go:61] "storage-provisioner" [0405b749-23a9-4449-90ac-59daf539647b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 00:43:38.207532  306295 system_pods.go:74] duration metric: took 5.045537ms to wait for pod list to return data ...
	I1217 00:43:38.207546  306295 default_sa.go:34] waiting for default service account to be created ...
	I1217 00:43:38.210183  306295 default_sa.go:45] found service account: "default"
	I1217 00:43:38.210203  306295 default_sa.go:55] duration metric: took 2.637116ms for default service account to be created ...
	I1217 00:43:38.210213  306295 system_pods.go:116] waiting for k8s-apps to be running ...
	I1217 00:43:38.213607  306295 system_pods.go:86] 8 kube-system pods found
	I1217 00:43:38.213681  306295 system_pods.go:89] "coredns-66bc5c9577-v76f4" [1370bcd6-f828-4ed0-af58-d2d87c7044bd] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 00:43:38.213702  306295 system_pods.go:89] "etcd-default-k8s-diff-port-414413" [286460a9-8a6c-4939-a2a0-0d5b31620d9a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1217 00:43:38.213714  306295 system_pods.go:89] "kindnet-hxhbf" [a4c2ed1b-ad48-484e-b779-4b93f3a72d0b] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1217 00:43:38.213728  306295 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-414413" [aa792fc5-63c2-4287-802e-c99c70a9ab2d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1217 00:43:38.213743  306295 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-414413" [e9a02305-5b73-4867-8605-48c8202cf5dd] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1217 00:43:38.213757  306295 system_pods.go:89] "kube-proxy-prlkw" [9a4571d0-7682-4838-aeb3-ccb4480157b8] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1217 00:43:38.213770  306295 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-414413" [a71da427-5b35-43f4-827b-62a96fdfda42] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1217 00:43:38.213782  306295 system_pods.go:89] "storage-provisioner" [0405b749-23a9-4449-90ac-59daf539647b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 00:43:38.213793  306295 system_pods.go:126] duration metric: took 3.573126ms to wait for k8s-apps to be running ...
	I1217 00:43:38.213806  306295 system_svc.go:44] waiting for kubelet service to be running ....
	I1217 00:43:38.213863  306295 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 00:43:38.229614  306295 system_svc.go:56] duration metric: took 15.79907ms WaitForService to wait for kubelet
	I1217 00:43:38.229647  306295 kubeadm.go:587] duration metric: took 3.266149884s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 00:43:38.229669  306295 node_conditions.go:102] verifying NodePressure condition ...
	I1217 00:43:38.236003  306295 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1217 00:43:38.236031  306295 node_conditions.go:123] node cpu capacity is 8
	I1217 00:43:38.236048  306295 node_conditions.go:105] duration metric: took 6.372563ms to run NodePressure ...
	I1217 00:43:38.236063  306295 start.go:242] waiting for startup goroutines ...
	I1217 00:43:38.236076  306295 start.go:247] waiting for cluster config update ...
	I1217 00:43:38.236092  306295 start.go:256] writing updated cluster config ...
	I1217 00:43:38.236347  306295 ssh_runner.go:195] Run: rm -f paused
	I1217 00:43:38.240301  306295 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 00:43:38.243629  306295 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-v76f4" in "kube-system" namespace to be "Ready" or be gone ...
	
	
	==> CRI-O <==
	Dec 17 00:43:05 no-preload-864613 crio[569]: time="2025-12-17T00:43:05.097951853Z" level=info msg="Started container" PID=1725 containerID=0c06d21d12bd976afad02c72487a39a9b15d6c50af9d84c2208a4f7f406093b3 description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7x9w8/dashboard-metrics-scraper id=14c9eb41-5b3f-4f0c-bb07-676e6f4377dd name=/runtime.v1.RuntimeService/StartContainer sandboxID=4cf20fadb03c18e861e72a26441d7f22bbc09f6a939f9398dc24b01fde7b1fef
	Dec 17 00:43:06 no-preload-864613 crio[569]: time="2025-12-17T00:43:06.025010093Z" level=info msg="Removing container: 3028f2e3831cc335e16389f8a1488de719f1c76e83ababeed3ab223565c1cd4b" id=c02f5cec-d6b1-4bbd-85d1-009538d9562d name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 17 00:43:06 no-preload-864613 crio[569]: time="2025-12-17T00:43:06.035125676Z" level=info msg="Removed container 3028f2e3831cc335e16389f8a1488de719f1c76e83ababeed3ab223565c1cd4b: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7x9w8/dashboard-metrics-scraper" id=c02f5cec-d6b1-4bbd-85d1-009538d9562d name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 17 00:43:16 no-preload-864613 crio[569]: time="2025-12-17T00:43:16.052026324Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=d110c010-d3e9-4942-8ca2-2d14e4d206a4 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:43:16 no-preload-864613 crio[569]: time="2025-12-17T00:43:16.052975502Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=20ebbd59-74fd-4d63-93f2-28f990483020 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:43:16 no-preload-864613 crio[569]: time="2025-12-17T00:43:16.054097723Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=913b2b93-b430-481e-96bc-0e2a389538f7 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 00:43:16 no-preload-864613 crio[569]: time="2025-12-17T00:43:16.054240702Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 00:43:16 no-preload-864613 crio[569]: time="2025-12-17T00:43:16.059350014Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 00:43:16 no-preload-864613 crio[569]: time="2025-12-17T00:43:16.059527486Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/748402e35e40c3bed5e388c284651ef045b6b1cdbab11d514aa77527819ddf63/merged/etc/passwd: no such file or directory"
	Dec 17 00:43:16 no-preload-864613 crio[569]: time="2025-12-17T00:43:16.059562211Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/748402e35e40c3bed5e388c284651ef045b6b1cdbab11d514aa77527819ddf63/merged/etc/group: no such file or directory"
	Dec 17 00:43:16 no-preload-864613 crio[569]: time="2025-12-17T00:43:16.05978406Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 00:43:16 no-preload-864613 crio[569]: time="2025-12-17T00:43:16.095102048Z" level=info msg="Created container 5587cb88805177c695bf1ec86ad55d11c6c3c94174e7b9d4fd7505596629efb9: kube-system/storage-provisioner/storage-provisioner" id=913b2b93-b430-481e-96bc-0e2a389538f7 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 00:43:16 no-preload-864613 crio[569]: time="2025-12-17T00:43:16.095892065Z" level=info msg="Starting container: 5587cb88805177c695bf1ec86ad55d11c6c3c94174e7b9d4fd7505596629efb9" id=82e9ce79-32bf-43ec-8b3b-4c6c638162b6 name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 00:43:16 no-preload-864613 crio[569]: time="2025-12-17T00:43:16.098068439Z" level=info msg="Started container" PID=1744 containerID=5587cb88805177c695bf1ec86ad55d11c6c3c94174e7b9d4fd7505596629efb9 description=kube-system/storage-provisioner/storage-provisioner id=82e9ce79-32bf-43ec-8b3b-4c6c638162b6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=37813881ab61336036a44898497a25042d9bc5770da5f59bafddaf05f62f319f
	Dec 17 00:43:28 no-preload-864613 crio[569]: time="2025-12-17T00:43:28.926099582Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=6b52d515-4ccc-481e-81e1-90ea31f90d4a name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:43:28 no-preload-864613 crio[569]: time="2025-12-17T00:43:28.927498192Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=34b19578-5cbb-47ec-b17a-c30090ec9982 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:43:28 no-preload-864613 crio[569]: time="2025-12-17T00:43:28.928631944Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7x9w8/dashboard-metrics-scraper" id=af62e3aa-97df-403e-9d63-c36881ad5628 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 00:43:28 no-preload-864613 crio[569]: time="2025-12-17T00:43:28.928902155Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 00:43:28 no-preload-864613 crio[569]: time="2025-12-17T00:43:28.938529294Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 00:43:28 no-preload-864613 crio[569]: time="2025-12-17T00:43:28.939292336Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 00:43:28 no-preload-864613 crio[569]: time="2025-12-17T00:43:28.974707317Z" level=info msg="Created container b3918d1baa01c6eee1e18e913b70130777045f1037ee97fb2baa52d82998123b: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7x9w8/dashboard-metrics-scraper" id=af62e3aa-97df-403e-9d63-c36881ad5628 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 00:43:28 no-preload-864613 crio[569]: time="2025-12-17T00:43:28.97562752Z" level=info msg="Starting container: b3918d1baa01c6eee1e18e913b70130777045f1037ee97fb2baa52d82998123b" id=81b74f8c-7f50-48fa-b894-1b79afdf7bce name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 00:43:28 no-preload-864613 crio[569]: time="2025-12-17T00:43:28.97825746Z" level=info msg="Started container" PID=1776 containerID=b3918d1baa01c6eee1e18e913b70130777045f1037ee97fb2baa52d82998123b description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7x9w8/dashboard-metrics-scraper id=81b74f8c-7f50-48fa-b894-1b79afdf7bce name=/runtime.v1.RuntimeService/StartContainer sandboxID=4cf20fadb03c18e861e72a26441d7f22bbc09f6a939f9398dc24b01fde7b1fef
	Dec 17 00:43:29 no-preload-864613 crio[569]: time="2025-12-17T00:43:29.093260759Z" level=info msg="Removing container: 0c06d21d12bd976afad02c72487a39a9b15d6c50af9d84c2208a4f7f406093b3" id=83ab800f-4de7-47d9-9556-434afaae9f72 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 17 00:43:29 no-preload-864613 crio[569]: time="2025-12-17T00:43:29.105250043Z" level=info msg="Removed container 0c06d21d12bd976afad02c72487a39a9b15d6c50af9d84c2208a4f7f406093b3: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7x9w8/dashboard-metrics-scraper" id=83ab800f-4de7-47d9-9556-434afaae9f72 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	b3918d1baa01c       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           12 seconds ago      Exited              dashboard-metrics-scraper   3                   4cf20fadb03c1       dashboard-metrics-scraper-867fb5f87b-7x9w8   kubernetes-dashboard
	5587cb8880517       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           25 seconds ago      Running             storage-provisioner         1                   37813881ab613       storage-provisioner                          kube-system
	e446ab4cc7b9a       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   49 seconds ago      Running             kubernetes-dashboard        0                   1bef9be22dd39       kubernetes-dashboard-b84665fb8-nrnvc         kubernetes-dashboard
	fcf4367a5b6e0       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           55 seconds ago      Running             busybox                     1                   5f2a786b41554       busybox                                      default
	992ca65d8279b       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                           55 seconds ago      Running             coredns                     0                   a14cb4520bafa       coredns-7d764666f9-6ql6r                     kube-system
	c168143de300c       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           55 seconds ago      Running             kindnet-cni                 0                   e78ba32b0a233       kindnet-bpf4x                                kube-system
	40909a37f96e0       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810                                           55 seconds ago      Running             kube-proxy                  0                   c804a6fee49c9       kube-proxy-2kddk                             kube-system
	b2af3f621d169       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           56 seconds ago      Exited              storage-provisioner         0                   37813881ab613       storage-provisioner                          kube-system
	4b34ed74185a7       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                           58 seconds ago      Running             etcd                        0                   2a60c5c2b3c41       etcd-no-preload-864613                       kube-system
	a590d671bfa52       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc                                           58 seconds ago      Running             kube-controller-manager     0                   6a32a1d18eb1f       kube-controller-manager-no-preload-864613    kube-system
	a12cf220a059b       aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b                                           58 seconds ago      Running             kube-apiserver              0                   8c48c0ae1236d       kube-apiserver-no-preload-864613             kube-system
	d592a6ba05b7b       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46                                           58 seconds ago      Running             kube-scheduler              0                   07890319ba75d       kube-scheduler-no-preload-864613             kube-system
	
	
	==> coredns [992ca65d8279bc176a68afbe577e49037ece762e1bbf7e625c4270f35d29840c] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:36649 - 55421 "HINFO IN 6089023238814399908.9138146662419910988. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.486684094s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
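The CoreDNS log above shows the kubernetes plugin waiting for (and briefly failing to watch) the API server; the ready plugin only answers once that sync completes. A sketch for probing it, assuming CoreDNS's default ready port 8181 and the standard k8s-app=kube-dns label:

	kubectl -n kube-system get pods -l k8s-app=kube-dns
	kubectl -n kube-system port-forward deploy/coredns 8181:8181 &
	curl -s http://127.0.0.1:8181/ready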
	
	
	==> describe nodes <==
	Name:               no-preload-864613
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-864613
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c7bb9b74fe8fa422b352c813eb039f077f405cb1
	                    minikube.k8s.io/name=no-preload-864613
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_17T00_41_45_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Dec 2025 00:41:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-864613
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Dec 2025 00:43:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Dec 2025 00:43:14 +0000   Wed, 17 Dec 2025 00:41:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Dec 2025 00:43:14 +0000   Wed, 17 Dec 2025 00:41:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Dec 2025 00:43:14 +0000   Wed, 17 Dec 2025 00:41:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Dec 2025 00:43:14 +0000   Wed, 17 Dec 2025 00:42:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    no-preload-864613
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 f435001bb2f82675fe905cfc693dda54
	  System UUID:                213ec30f-ec82-463e-b257-cb730a6beffc
	  Boot ID:                    0e9cedc6-c46e-4354-b3d2-9272a8b33ae5
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 coredns-7d764666f9-6ql6r                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     111s
	  kube-system                 etcd-no-preload-864613                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         117s
	  kube-system                 kindnet-bpf4x                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      111s
	  kube-system                 kube-apiserver-no-preload-864613              250m (3%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-controller-manager-no-preload-864613     200m (2%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-proxy-2kddk                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-scheduler-no-preload-864613              100m (1%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-7x9w8    0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-nrnvc          0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  112s  node-controller  Node no-preload-864613 event: Registered Node no-preload-864613 in Controller
	  Normal  RegisteredNode  54s   node-controller  Node no-preload-864613 event: Registered Node no-preload-864613 in Controller
	
	
	==> dmesg <==
	[  +0.089382] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024236] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.864694] kauditd_printk_skb: 47 callbacks suppressed
	[Dec17 00:07] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +1.006904] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +1.023891] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +1.023891] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +1.023853] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +1.023879] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +2.048755] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +4.030595] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +8.447143] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[ +16.382404] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000015] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[Dec17 00:08] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	
	
	==> etcd [4b34ed74185a723d1987fd893c6b89aa61e85dd77a4391ea83bf44f5d07a0931] <==
	{"level":"warn","ts":"2025-12-17T00:42:43.342110Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39422","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:42:43.348455Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:42:43.355265Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39458","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:42:43.361835Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39476","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:42:43.370175Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39502","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:42:43.376401Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39526","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:42:43.383383Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39550","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:42:43.389872Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:42:43.396778Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39592","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:42:43.403419Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:42:43.409689Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39638","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:42:43.415915Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39670","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:42:43.423355Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39688","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:42:43.429682Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39690","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:42:43.448345Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39716","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:42:43.455164Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39734","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:42:43.461543Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39750","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:42:43.467552Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39764","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:42:43.512877Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39778","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-17T00:42:44.240752Z","caller":"traceutil/trace.go:172","msg":"trace[1621338994] transaction","detail":"{read_only:false; number_of_response:0; response_revision:455; }","duration":"134.659004ms","start":"2025-12-17T00:42:44.106073Z","end":"2025-12-17T00:42:44.240732Z","steps":["trace[1621338994] 'process raft request'  (duration: 134.588334ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T00:42:44.324677Z","caller":"traceutil/trace.go:172","msg":"trace[111072781] transaction","detail":"{read_only:false; response_revision:456; number_of_response:1; }","duration":"196.637518ms","start":"2025-12-17T00:42:44.127493Z","end":"2025-12-17T00:42:44.324131Z","steps":["trace[111072781] 'process raft request'  (duration: 196.041653ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T00:42:44.458577Z","caller":"traceutil/trace.go:172","msg":"trace[1487177061] linearizableReadLoop","detail":"{readStateIndex:482; appliedIndex:482; }","duration":"114.737118ms","start":"2025-12-17T00:42:44.343814Z","end":"2025-12-17T00:42:44.458551Z","steps":["trace[1487177061] 'read index received'  (duration: 114.728434ms)","trace[1487177061] 'applied index is now lower than readState.Index'  (duration: 7.552µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-17T00:42:44.459113Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"115.254181ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/roles/kube-system/system:persistent-volume-provisioner\" limit:1 ","response":"range_response_count:1 size:1137"}
	{"level":"info","ts":"2025-12-17T00:42:44.459198Z","caller":"traceutil/trace.go:172","msg":"trace[1363464694] range","detail":"{range_begin:/registry/roles/kube-system/system:persistent-volume-provisioner; range_end:; response_count:1; response_revision:456; }","duration":"115.374481ms","start":"2025-12-17T00:42:44.343811Z","end":"2025-12-17T00:42:44.459185Z","steps":["trace[1363464694] 'agreement among raft nodes before linearized reading'  (duration: 114.886803ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T00:42:44.459196Z","caller":"traceutil/trace.go:172","msg":"trace[1058360965] transaction","detail":"{read_only:false; number_of_response:0; response_revision:456; }","duration":"128.526127ms","start":"2025-12-17T00:42:44.330649Z","end":"2025-12-17T00:42:44.459175Z","steps":["trace[1058360965] 'process raft request'  (duration: 127.959824ms)"],"step_count":1}
	
	
	==> kernel <==
	 00:43:41 up  1:26,  0 user,  load average: 3.85, 2.97, 2.01
	Linux no-preload-864613 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [c168143de300c008ec57bfb6f217961739426196e44b7a3fe545f9f941260c0a] <==
	I1217 00:42:45.580744       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1217 00:42:45.581033       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1217 00:42:45.581193       1 main.go:148] setting mtu 1500 for CNI 
	I1217 00:42:45.581206       1 main.go:178] kindnetd IP family: "ipv4"
	I1217 00:42:45.581226       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-17T00:42:45Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1217 00:42:45.783423       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1217 00:42:45.783528       1 controller.go:381] "Waiting for informer caches to sync"
	I1217 00:42:45.783827       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1217 00:42:45.784182       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1217 00:42:46.180505       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1217 00:42:46.180542       1 metrics.go:72] Registering metrics
	I1217 00:42:46.181031       1 controller.go:711] "Syncing nftables rules"
	I1217 00:42:55.784238       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1217 00:42:55.784315       1 main.go:301] handling current node
	I1217 00:43:05.784229       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1217 00:43:05.784261       1 main.go:301] handling current node
	I1217 00:43:15.784362       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1217 00:43:15.784407       1 main.go:301] handling current node
	I1217 00:43:25.784127       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1217 00:43:25.784163       1 main.go:301] handling current node
	I1217 00:43:35.784226       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1217 00:43:35.784273       1 main.go:301] handling current node
	
	
	==> kube-apiserver [a12cf220a059b218df62a14f9045f72149c1009f3507c8c36e206fdf43dc9d57] <==
	I1217 00:42:43.973554       1 aggregator.go:187] initial CRD sync complete...
	I1217 00:42:43.973563       1 autoregister_controller.go:144] Starting autoregister controller
	I1217 00:42:43.973568       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1217 00:42:43.973574       1 cache.go:39] Caches are synced for autoregister controller
	I1217 00:42:43.973754       1 shared_informer.go:377] "Caches are synced"
	I1217 00:42:43.973814       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1217 00:42:43.973840       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1217 00:42:43.978133       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1217 00:42:43.989033       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1217 00:42:43.991136       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1217 00:42:43.999829       1 shared_informer.go:377] "Caches are synced"
	I1217 00:42:43.999858       1 policy_source.go:248] refreshing policies
	I1217 00:42:44.007785       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1217 00:42:44.462119       1 controller.go:667] quota admission added evaluator for: namespaces
	I1217 00:42:44.495773       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1217 00:42:44.522147       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1217 00:42:44.531859       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1217 00:42:44.539608       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1217 00:42:44.575251       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.103.36.252"}
	I1217 00:42:44.591178       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.100.215.84"}
	I1217 00:42:44.877153       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1217 00:42:47.567581       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1217 00:42:47.620217       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1217 00:42:47.766326       1 controller.go:667] quota admission added evaluator for: endpoints
	I1217 00:42:47.766327       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [a590d671bfa52ffb77f09298e606dd5a6cef506d25bf7c749bd516cf65fabaab] <==
	I1217 00:42:47.123216       1 shared_informer.go:377] "Caches are synced"
	I1217 00:42:47.123255       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1217 00:42:47.123270       1 shared_informer.go:377] "Caches are synced"
	I1217 00:42:47.123331       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="no-preload-864613"
	I1217 00:42:47.123344       1 shared_informer.go:377] "Caches are synced"
	I1217 00:42:47.123347       1 shared_informer.go:377] "Caches are synced"
	I1217 00:42:47.123370       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1217 00:42:47.123411       1 shared_informer.go:377] "Caches are synced"
	I1217 00:42:47.123109       1 shared_informer.go:377] "Caches are synced"
	I1217 00:42:47.123485       1 shared_informer.go:377] "Caches are synced"
	I1217 00:42:47.123540       1 range_allocator.go:177] "Sending events to api server"
	I1217 00:42:47.123562       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1217 00:42:47.123567       1 shared_informer.go:370] "Waiting for caches to sync"
	I1217 00:42:47.123572       1 shared_informer.go:377] "Caches are synced"
	I1217 00:42:47.123587       1 shared_informer.go:377] "Caches are synced"
	I1217 00:42:47.123769       1 shared_informer.go:377] "Caches are synced"
	I1217 00:42:47.123572       1 shared_informer.go:377] "Caches are synced"
	I1217 00:42:47.124149       1 shared_informer.go:377] "Caches are synced"
	I1217 00:42:47.124479       1 shared_informer.go:377] "Caches are synced"
	I1217 00:42:47.125969       1 shared_informer.go:370] "Waiting for caches to sync"
	I1217 00:42:47.129596       1 shared_informer.go:377] "Caches are synced"
	I1217 00:42:47.223810       1 shared_informer.go:377] "Caches are synced"
	I1217 00:42:47.223832       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1217 00:42:47.223837       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1217 00:42:47.226881       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [40909a37f96e05409eb1c53f56f9585bf17482d70eae48d671deb9c28e8a104c] <==
	I1217 00:42:45.352572       1 server_linux.go:53] "Using iptables proxy"
	I1217 00:42:45.408208       1 shared_informer.go:370] "Waiting for caches to sync"
	I1217 00:42:45.508311       1 shared_informer.go:377] "Caches are synced"
	I1217 00:42:45.508353       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1217 00:42:45.508458       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1217 00:42:45.528077       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1217 00:42:45.528154       1 server_linux.go:136] "Using iptables Proxier"
	I1217 00:42:45.534808       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1217 00:42:45.535288       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1217 00:42:45.535551       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 00:42:45.536985       1 config.go:200] "Starting service config controller"
	I1217 00:42:45.537027       1 config.go:106] "Starting endpoint slice config controller"
	I1217 00:42:45.537039       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1217 00:42:45.537050       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1217 00:42:45.537059       1 config.go:403] "Starting serviceCIDR config controller"
	I1217 00:42:45.537066       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1217 00:42:45.538735       1 config.go:309] "Starting node config controller"
	I1217 00:42:45.538762       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1217 00:42:45.538769       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1217 00:42:45.637766       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1217 00:42:45.637800       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1217 00:42:45.637643       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [d592a6ba05b7b5e2d53ffd9b29510a47348394c0b8faf29e99d49dce869dbeff] <==
	I1217 00:42:42.812414       1 serving.go:386] Generated self-signed cert in-memory
	W1217 00:42:43.899354       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1217 00:42:43.899486       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1217 00:42:43.899529       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1217 00:42:43.899578       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1217 00:42:43.923329       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-beta.0"
	I1217 00:42:43.923350       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 00:42:43.924946       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1217 00:42:43.924969       1 shared_informer.go:370] "Waiting for caches to sync"
	I1217 00:42:43.925099       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1217 00:42:43.925130       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1217 00:42:44.025129       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 17 00:43:02 no-preload-864613 kubelet[709]: E1217 00:43:02.011964     709 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-no-preload-864613" containerName="kube-scheduler"
	Dec 17 00:43:05 no-preload-864613 kubelet[709]: E1217 00:43:05.054442     709 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7x9w8" containerName="dashboard-metrics-scraper"
	Dec 17 00:43:05 no-preload-864613 kubelet[709]: I1217 00:43:05.054481     709 scope.go:122] "RemoveContainer" containerID="3028f2e3831cc335e16389f8a1488de719f1c76e83ababeed3ab223565c1cd4b"
	Dec 17 00:43:06 no-preload-864613 kubelet[709]: I1217 00:43:06.023791     709 scope.go:122] "RemoveContainer" containerID="3028f2e3831cc335e16389f8a1488de719f1c76e83ababeed3ab223565c1cd4b"
	Dec 17 00:43:06 no-preload-864613 kubelet[709]: E1217 00:43:06.024091     709 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7x9w8" containerName="dashboard-metrics-scraper"
	Dec 17 00:43:06 no-preload-864613 kubelet[709]: I1217 00:43:06.024122     709 scope.go:122] "RemoveContainer" containerID="0c06d21d12bd976afad02c72487a39a9b15d6c50af9d84c2208a4f7f406093b3"
	Dec 17 00:43:06 no-preload-864613 kubelet[709]: E1217 00:43:06.024312     709 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-7x9w8_kubernetes-dashboard(f5201a5f-a7c0-43be-8bf9-89074d8c4c07)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7x9w8" podUID="f5201a5f-a7c0-43be-8bf9-89074d8c4c07"
	Dec 17 00:43:15 no-preload-864613 kubelet[709]: E1217 00:43:15.054286     709 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7x9w8" containerName="dashboard-metrics-scraper"
	Dec 17 00:43:15 no-preload-864613 kubelet[709]: I1217 00:43:15.054344     709 scope.go:122] "RemoveContainer" containerID="0c06d21d12bd976afad02c72487a39a9b15d6c50af9d84c2208a4f7f406093b3"
	Dec 17 00:43:15 no-preload-864613 kubelet[709]: E1217 00:43:15.054815     709 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-7x9w8_kubernetes-dashboard(f5201a5f-a7c0-43be-8bf9-89074d8c4c07)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7x9w8" podUID="f5201a5f-a7c0-43be-8bf9-89074d8c4c07"
	Dec 17 00:43:16 no-preload-864613 kubelet[709]: I1217 00:43:16.051559     709 scope.go:122] "RemoveContainer" containerID="b2af3f621d169db6db7a50be514e4c022a2caa38e1084d576131e2475f388d5d"
	Dec 17 00:43:24 no-preload-864613 kubelet[709]: E1217 00:43:24.297704     709 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-6ql6r" containerName="coredns"
	Dec 17 00:43:28 no-preload-864613 kubelet[709]: E1217 00:43:28.925251     709 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7x9w8" containerName="dashboard-metrics-scraper"
	Dec 17 00:43:28 no-preload-864613 kubelet[709]: I1217 00:43:28.925312     709 scope.go:122] "RemoveContainer" containerID="0c06d21d12bd976afad02c72487a39a9b15d6c50af9d84c2208a4f7f406093b3"
	Dec 17 00:43:29 no-preload-864613 kubelet[709]: I1217 00:43:29.090837     709 scope.go:122] "RemoveContainer" containerID="0c06d21d12bd976afad02c72487a39a9b15d6c50af9d84c2208a4f7f406093b3"
	Dec 17 00:43:29 no-preload-864613 kubelet[709]: E1217 00:43:29.091340     709 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7x9w8" containerName="dashboard-metrics-scraper"
	Dec 17 00:43:29 no-preload-864613 kubelet[709]: I1217 00:43:29.091377     709 scope.go:122] "RemoveContainer" containerID="b3918d1baa01c6eee1e18e913b70130777045f1037ee97fb2baa52d82998123b"
	Dec 17 00:43:29 no-preload-864613 kubelet[709]: E1217 00:43:29.091557     709 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-7x9w8_kubernetes-dashboard(f5201a5f-a7c0-43be-8bf9-89074d8c4c07)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7x9w8" podUID="f5201a5f-a7c0-43be-8bf9-89074d8c4c07"
	Dec 17 00:43:35 no-preload-864613 kubelet[709]: E1217 00:43:35.054631     709 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7x9w8" containerName="dashboard-metrics-scraper"
	Dec 17 00:43:35 no-preload-864613 kubelet[709]: I1217 00:43:35.054671     709 scope.go:122] "RemoveContainer" containerID="b3918d1baa01c6eee1e18e913b70130777045f1037ee97fb2baa52d82998123b"
	Dec 17 00:43:35 no-preload-864613 kubelet[709]: E1217 00:43:35.055503     709 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-7x9w8_kubernetes-dashboard(f5201a5f-a7c0-43be-8bf9-89074d8c4c07)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7x9w8" podUID="f5201a5f-a7c0-43be-8bf9-89074d8c4c07"
	Dec 17 00:43:38 no-preload-864613 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 17 00:43:38 no-preload-864613 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 17 00:43:38 no-preload-864613 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:43:38 no-preload-864613 systemd[1]: kubelet.service: Consumed 1.808s CPU time.
	
	
	==> kubernetes-dashboard [e446ab4cc7b9aeb434956ba232e1f5873d98c50b20e63779da4e13870a2d7e30] <==
	2025/12/17 00:42:51 Using namespace: kubernetes-dashboard
	2025/12/17 00:42:51 Using in-cluster config to connect to apiserver
	2025/12/17 00:42:51 Using secret token for csrf signing
	2025/12/17 00:42:51 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/17 00:42:51 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/17 00:42:51 Successful initial request to the apiserver, version: v1.35.0-beta.0
	2025/12/17 00:42:51 Generating JWE encryption key
	2025/12/17 00:42:51 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/17 00:42:51 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/17 00:42:51 Initializing JWE encryption key from synchronized object
	2025/12/17 00:42:51 Creating in-cluster Sidecar client
	2025/12/17 00:42:51 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/17 00:42:51 Serving insecurely on HTTP port: 9090
	2025/12/17 00:43:21 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/17 00:42:51 Starting overwatch
	
	
	==> storage-provisioner [5587cb88805177c695bf1ec86ad55d11c6c3c94174e7b9d4fd7505596629efb9] <==
	I1217 00:43:16.111152       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1217 00:43:16.122920       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1217 00:43:16.122970       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1217 00:43:16.125341       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:43:19.580134       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:43:23.840847       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:43:27.440096       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:43:30.495817       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:43:33.517754       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:43:33.539177       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1217 00:43:33.539414       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1217 00:43:33.539549       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"77491952-ee3b-4988-94b4-88e7432dd743", APIVersion:"v1", ResourceVersion:"650", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-864613_36396c6c-6d1c-457a-832c-40fa14511c43 became leader
	I1217 00:43:33.539651       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-864613_36396c6c-6d1c-457a-832c-40fa14511c43!
	W1217 00:43:33.541951       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:43:33.547103       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1217 00:43:33.640666       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-864613_36396c6c-6d1c-457a-832c-40fa14511c43!
	W1217 00:43:35.550803       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:43:35.555972       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:43:37.560482       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:43:37.569293       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:43:39.572854       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:43:39.577434       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:43:41.580694       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:43:41.586126       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [b2af3f621d169db6db7a50be514e4c022a2caa38e1084d576131e2475f388d5d] <==
	I1217 00:42:45.318424       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1217 00:43:15.322214       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-864613 -n no-preload-864613
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-864613 -n no-preload-864613: exit status 2 (345.849008ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context no-preload-864613 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-864613
helpers_test.go:244: (dbg) docker inspect no-preload-864613:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d31578a000b6bc0fd7f6db18dfc484bf6d5c523079339ecebac6aa5e2a0209d9",
	        "Created": "2025-12-17T00:41:22.987777185Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 290462,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T00:42:35.021551855Z",
	            "FinishedAt": "2025-12-17T00:42:34.023149179Z"
	        },
	        "Image": "sha256:2e44aac5cae5bb6b68b129ed5c85e80a5c1aac07706537d46ba12326f0e5c3cf",
	        "ResolvConfPath": "/var/lib/docker/containers/d31578a000b6bc0fd7f6db18dfc484bf6d5c523079339ecebac6aa5e2a0209d9/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d31578a000b6bc0fd7f6db18dfc484bf6d5c523079339ecebac6aa5e2a0209d9/hostname",
	        "HostsPath": "/var/lib/docker/containers/d31578a000b6bc0fd7f6db18dfc484bf6d5c523079339ecebac6aa5e2a0209d9/hosts",
	        "LogPath": "/var/lib/docker/containers/d31578a000b6bc0fd7f6db18dfc484bf6d5c523079339ecebac6aa5e2a0209d9/d31578a000b6bc0fd7f6db18dfc484bf6d5c523079339ecebac6aa5e2a0209d9-json.log",
	        "Name": "/no-preload-864613",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "no-preload-864613:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-864613",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d31578a000b6bc0fd7f6db18dfc484bf6d5c523079339ecebac6aa5e2a0209d9",
	                "LowerDir": "/var/lib/docker/overlay2/f190c06e656d738f85b08c978b5e137744361ddd53ad1e7f79ae34378398bcd5-init/diff:/var/lib/docker/overlay2/594b812fd6d8db89dab322ea9e00d43dd555e9709fb5e6953e3873cce717392c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f190c06e656d738f85b08c978b5e137744361ddd53ad1e7f79ae34378398bcd5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f190c06e656d738f85b08c978b5e137744361ddd53ad1e7f79ae34378398bcd5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f190c06e656d738f85b08c978b5e137744361ddd53ad1e7f79ae34378398bcd5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-864613",
	                "Source": "/var/lib/docker/volumes/no-preload-864613/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-864613",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-864613",
	                "name.minikube.sigs.k8s.io": "no-preload-864613",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "04de50d436722f7da70f41b390f15d4b9049c0521ef600919ec9cadf780c4d6a",
	            "SandboxKey": "/var/run/docker/netns/04de50d43672",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33083"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33084"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33087"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33085"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33086"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-864613": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f576aec2f4916437744d456261513e7c90cb52cd053227c69a0accdc704e8654",
	                    "EndpointID": "2d800be25b6aaf3b556b6b5936efdd7e9844a5fab6e18e247c68373baf3154f4",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "b2:d9:36:e8:ad:bc",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-864613",
	                        "d31578a000b6"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-864613 -n no-preload-864613
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-864613 -n no-preload-864613: exit status 2 (341.461052ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-864613 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p no-preload-864613 logs -n 25: (1.593670333s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬───────
──────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼───────
──────────────┤
	│ pause   │ -p old-k8s-version-742860 --alsologtostderr -v=1                                                                                                                                                                                                     │ old-k8s-version-742860       │ jenkins │ v1.37.0 │ 17 Dec 25 00:42 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-864613 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ no-preload-864613            │ jenkins │ v1.37.0 │ 17 Dec 25 00:42 UTC │ 17 Dec 25 00:42 UTC │
	│ start   │ -p no-preload-864613 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-864613            │ jenkins │ v1.37.0 │ 17 Dec 25 00:42 UTC │ 17 Dec 25 00:43 UTC │
	│ delete  │ -p old-k8s-version-742860                                                                                                                                                                                                                            │ old-k8s-version-742860       │ jenkins │ v1.37.0 │ 17 Dec 25 00:42 UTC │ 17 Dec 25 00:42 UTC │
	│ delete  │ -p old-k8s-version-742860                                                                                                                                                                                                                            │ old-k8s-version-742860       │ jenkins │ v1.37.0 │ 17 Dec 25 00:42 UTC │ 17 Dec 25 00:42 UTC │
	│ start   │ -p newest-cni-653717 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-653717            │ jenkins │ v1.37.0 │ 17 Dec 25 00:42 UTC │ 17 Dec 25 00:43 UTC │
	│ addons  │ enable metrics-server -p embed-certs-153232 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ embed-certs-153232           │ jenkins │ v1.37.0 │ 17 Dec 25 00:42 UTC │                     │
	│ stop    │ -p embed-certs-153232 --alsologtostderr -v=3                                                                                                                                                                                                         │ embed-certs-153232           │ jenkins │ v1.37.0 │ 17 Dec 25 00:42 UTC │ 17 Dec 25 00:43 UTC │
	│ addons  │ enable metrics-server -p newest-cni-653717 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ newest-cni-653717            │ jenkins │ v1.37.0 │ 17 Dec 25 00:43 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-414413 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                   │ default-k8s-diff-port-414413 │ jenkins │ v1.37.0 │ 17 Dec 25 00:43 UTC │                     │
	│ stop    │ -p newest-cni-653717 --alsologtostderr -v=3                                                                                                                                                                                                          │ newest-cni-653717            │ jenkins │ v1.37.0 │ 17 Dec 25 00:43 UTC │ 17 Dec 25 00:43 UTC │
	│ stop    │ -p default-k8s-diff-port-414413 --alsologtostderr -v=3                                                                                                                                                                                               │ default-k8s-diff-port-414413 │ jenkins │ v1.37.0 │ 17 Dec 25 00:43 UTC │ 17 Dec 25 00:43 UTC │
	│ addons  │ enable dashboard -p newest-cni-653717 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ newest-cni-653717            │ jenkins │ v1.37.0 │ 17 Dec 25 00:43 UTC │ 17 Dec 25 00:43 UTC │
	│ start   │ -p newest-cni-653717 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-653717            │ jenkins │ v1.37.0 │ 17 Dec 25 00:43 UTC │ 17 Dec 25 00:43 UTC │
	│ addons  │ enable dashboard -p embed-certs-153232 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ embed-certs-153232           │ jenkins │ v1.37.0 │ 17 Dec 25 00:43 UTC │ 17 Dec 25 00:43 UTC │
	│ start   │ -p embed-certs-153232 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-153232           │ jenkins │ v1.37.0 │ 17 Dec 25 00:43 UTC │                     │
	│ image   │ newest-cni-653717 image list --format=json                                                                                                                                                                                                           │ newest-cni-653717            │ jenkins │ v1.37.0 │ 17 Dec 25 00:43 UTC │ 17 Dec 25 00:43 UTC │
	│ pause   │ -p newest-cni-653717 --alsologtostderr -v=1                                                                                                                                                                                                          │ newest-cni-653717            │ jenkins │ v1.37.0 │ 17 Dec 25 00:43 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-414413 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                              │ default-k8s-diff-port-414413 │ jenkins │ v1.37.0 │ 17 Dec 25 00:43 UTC │ 17 Dec 25 00:43 UTC │
	│ start   │ -p default-k8s-diff-port-414413 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-414413 │ jenkins │ v1.37.0 │ 17 Dec 25 00:43 UTC │                     │
	│ delete  │ -p newest-cni-653717                                                                                                                                                                                                                                 │ newest-cni-653717            │ jenkins │ v1.37.0 │ 17 Dec 25 00:43 UTC │ 17 Dec 25 00:43 UTC │
	│ delete  │ -p newest-cni-653717                                                                                                                                                                                                                                 │ newest-cni-653717            │ jenkins │ v1.37.0 │ 17 Dec 25 00:43 UTC │ 17 Dec 25 00:43 UTC │
	│ start   │ -p auto-802249 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                              │ auto-802249                  │ jenkins │ v1.37.0 │ 17 Dec 25 00:43 UTC │                     │
	│ image   │ no-preload-864613 image list --format=json                                                                                                                                                                                                           │ no-preload-864613            │ jenkins │ v1.37.0 │ 17 Dec 25 00:43 UTC │ 17 Dec 25 00:43 UTC │
	│ pause   │ -p no-preload-864613 --alsologtostderr -v=1                                                                                                                                                                                                          │ no-preload-864613            │ jenkins │ v1.37.0 │ 17 Dec 25 00:43 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴───────
──────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 00:43:27
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 00:43:27.783899  307526 out.go:360] Setting OutFile to fd 1 ...
	I1217 00:43:27.784188  307526 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:43:27.784198  307526 out.go:374] Setting ErrFile to fd 2...
	I1217 00:43:27.784205  307526 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:43:27.784420  307526 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-12816/.minikube/bin
	I1217 00:43:27.784980  307526 out.go:368] Setting JSON to false
	I1217 00:43:27.786356  307526 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":5158,"bootTime":1765927050,"procs":319,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 00:43:27.786413  307526 start.go:143] virtualization: kvm guest
	I1217 00:43:27.792469  307526 out.go:179] * [auto-802249] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 00:43:27.794123  307526 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 00:43:27.794135  307526 notify.go:221] Checking for updates...
	I1217 00:43:27.796621  307526 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 00:43:27.798252  307526 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22168-12816/kubeconfig
	I1217 00:43:27.800079  307526 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-12816/.minikube
	I1217 00:43:27.801977  307526 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 00:43:27.803368  307526 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 00:43:27.805497  307526 config.go:182] Loaded profile config "default-k8s-diff-port-414413": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:43:27.805617  307526 config.go:182] Loaded profile config "embed-certs-153232": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:43:27.805718  307526 config.go:182] Loaded profile config "no-preload-864613": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1217 00:43:27.805829  307526 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 00:43:27.834498  307526 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1217 00:43:27.834623  307526 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:43:27.893922  307526 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-17 00:43:27.883816453 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 00:43:27.894056  307526 docker.go:319] overlay module found
	I1217 00:43:27.895774  307526 out.go:179] * Using the docker driver based on user configuration
	I1217 00:43:27.896798  307526 start.go:309] selected driver: docker
	I1217 00:43:27.896811  307526 start.go:927] validating driver "docker" against <nil>
	I1217 00:43:27.896822  307526 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 00:43:27.897491  307526 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:43:27.960308  307526 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-17 00:43:27.949730665 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 00:43:27.960526  307526 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1217 00:43:27.960792  307526 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 00:43:27.962270  307526 out.go:179] * Using Docker driver with root privileges
	I1217 00:43:27.963165  307526 cni.go:84] Creating CNI manager for ""
	I1217 00:43:27.963225  307526 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 00:43:27.963236  307526 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1217 00:43:27.963300  307526 start.go:353] cluster config:
	{Name:auto-802249 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:auto-802249 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:43:27.964435  307526 out.go:179] * Starting "auto-802249" primary control-plane node in "auto-802249" cluster
	I1217 00:43:27.965456  307526 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 00:43:27.966915  307526 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 00:43:27.967856  307526 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1217 00:43:27.967894  307526 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22168-12816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1217 00:43:27.967895  307526 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 00:43:27.967917  307526 cache.go:65] Caching tarball of preloaded images
	I1217 00:43:27.968039  307526 preload.go:238] Found /home/jenkins/minikube-integration/22168-12816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1217 00:43:27.968054  307526 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1217 00:43:27.968179  307526 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/auto-802249/config.json ...
	I1217 00:43:27.968209  307526 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/auto-802249/config.json: {Name:mk6a800c556cbb3f82d1d4ac2ca5b5edbc64dd1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:43:27.992177  307526 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 00:43:27.992201  307526 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 00:43:27.992224  307526 cache.go:243] Successfully downloaded all kic artifacts
	I1217 00:43:27.992259  307526 start.go:360] acquireMachinesLock for auto-802249: {Name:mkbccf009dcb23cd4ffd2a50ee9c72043c15e319 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 00:43:27.992359  307526 start.go:364] duration metric: took 79.06µs to acquireMachinesLock for "auto-802249"
	I1217 00:43:27.992387  307526 start.go:93] Provisioning new machine with config: &{Name:auto-802249 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:auto-802249 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 00:43:27.992483  307526 start.go:125] createHost starting for "" (driver="docker")
	I1217 00:43:23.316167  301437 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1217 00:43:23.321794  301437 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1217 00:43:23.323158  301437 api_server.go:141] control plane version: v1.34.2
	I1217 00:43:23.323187  301437 api_server.go:131] duration metric: took 1.0079864s to wait for apiserver health ...
	I1217 00:43:23.323198  301437 system_pods.go:43] waiting for kube-system pods to appear ...
	I1217 00:43:23.328161  301437 system_pods.go:59] 8 kube-system pods found
	I1217 00:43:23.328199  301437 system_pods.go:61] "coredns-66bc5c9577-vtspd" [aedf434b-e03e-479c-a8f2-199e28231d61] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 00:43:23.328211  301437 system_pods.go:61] "etcd-embed-certs-153232" [68a7a631-c79e-48d1-bd8d-1aafc2b61fcc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1217 00:43:23.328221  301437 system_pods.go:61] "kindnet-zffzt" [f06f5d73-eef9-4876-b0aa-862d58c18777] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1217 00:43:23.328232  301437 system_pods.go:61] "kube-apiserver-embed-certs-153232" [a0a484be-31c5-4471-b35c-7d059d9e1b00] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1217 00:43:23.328246  301437 system_pods.go:61] "kube-controller-manager-embed-certs-153232" [6fd01afb-bd8e-450b-9082-310ff94c5958] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1217 00:43:23.328263  301437 system_pods.go:61] "kube-proxy-82b8k" [68026912-6bcc-4aee-b806-51f967dc200f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1217 00:43:23.328275  301437 system_pods.go:61] "kube-scheduler-embed-certs-153232" [af854f70-8bef-44c5-ad64-197a3282d5c3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1217 00:43:23.328288  301437 system_pods.go:61] "storage-provisioner" [ad4a1982-2da6-490d-bcba-f04782d2d9b8] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 00:43:23.328296  301437 system_pods.go:74] duration metric: took 5.091218ms to wait for pod list to return data ...
	I1217 00:43:23.328306  301437 default_sa.go:34] waiting for default service account to be created ...
	I1217 00:43:23.331475  301437 default_sa.go:45] found service account: "default"
	I1217 00:43:23.331498  301437 default_sa.go:55] duration metric: took 3.185353ms for default service account to be created ...
	I1217 00:43:23.331510  301437 system_pods.go:116] waiting for k8s-apps to be running ...
	I1217 00:43:23.335177  301437 system_pods.go:86] 8 kube-system pods found
	I1217 00:43:23.335208  301437 system_pods.go:89] "coredns-66bc5c9577-vtspd" [aedf434b-e03e-479c-a8f2-199e28231d61] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 00:43:23.335227  301437 system_pods.go:89] "etcd-embed-certs-153232" [68a7a631-c79e-48d1-bd8d-1aafc2b61fcc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1217 00:43:23.335238  301437 system_pods.go:89] "kindnet-zffzt" [f06f5d73-eef9-4876-b0aa-862d58c18777] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1217 00:43:23.335247  301437 system_pods.go:89] "kube-apiserver-embed-certs-153232" [a0a484be-31c5-4471-b35c-7d059d9e1b00] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1217 00:43:23.335255  301437 system_pods.go:89] "kube-controller-manager-embed-certs-153232" [6fd01afb-bd8e-450b-9082-310ff94c5958] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1217 00:43:23.335264  301437 system_pods.go:89] "kube-proxy-82b8k" [68026912-6bcc-4aee-b806-51f967dc200f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1217 00:43:23.335273  301437 system_pods.go:89] "kube-scheduler-embed-certs-153232" [af854f70-8bef-44c5-ad64-197a3282d5c3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1217 00:43:23.335281  301437 system_pods.go:89] "storage-provisioner" [ad4a1982-2da6-490d-bcba-f04782d2d9b8] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 00:43:23.335290  301437 system_pods.go:126] duration metric: took 3.772865ms to wait for k8s-apps to be running ...
	I1217 00:43:23.335300  301437 system_svc.go:44] waiting for kubelet service to be running ....
	I1217 00:43:23.335346  301437 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 00:43:23.351017  301437 system_svc.go:56] duration metric: took 15.681058ms WaitForService to wait for kubelet
	I1217 00:43:23.351048  301437 kubeadm.go:587] duration metric: took 3.059548515s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 00:43:23.351069  301437 node_conditions.go:102] verifying NodePressure condition ...
	I1217 00:43:23.353894  301437 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1217 00:43:23.353920  301437 node_conditions.go:123] node cpu capacity is 8
	I1217 00:43:23.353939  301437 node_conditions.go:105] duration metric: took 2.863427ms to run NodePressure ...
	I1217 00:43:23.353952  301437 start.go:242] waiting for startup goroutines ...
	I1217 00:43:23.353966  301437 start.go:247] waiting for cluster config update ...
	I1217 00:43:23.353983  301437 start.go:256] writing updated cluster config ...
	I1217 00:43:23.354303  301437 ssh_runner.go:195] Run: rm -f paused
	I1217 00:43:23.358200  301437 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 00:43:23.362406  301437 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-vtspd" in "kube-system" namespace to be "Ready" or be gone ...
	W1217 00:43:25.369130  301437 pod_ready.go:104] pod "coredns-66bc5c9577-vtspd" is not "Ready", error: <nil>
	W1217 00:43:27.871364  301437 pod_ready.go:104] pod "coredns-66bc5c9577-vtspd" is not "Ready", error: <nil>
	I1217 00:43:24.861152  306295 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-414413" ...
	I1217 00:43:24.861228  306295 cli_runner.go:164] Run: docker start default-k8s-diff-port-414413
	I1217 00:43:25.138682  306295 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-414413 --format={{.State.Status}}
	I1217 00:43:25.161023  306295 kic.go:430] container "default-k8s-diff-port-414413" state is running.
	I1217 00:43:25.161708  306295 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-414413
	I1217 00:43:25.195207  306295 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/default-k8s-diff-port-414413/config.json ...
	I1217 00:43:25.195443  306295 machine.go:94] provisionDockerMachine start ...
	I1217 00:43:25.195531  306295 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-414413
	I1217 00:43:25.216531  306295 main.go:143] libmachine: Using SSH client type: native
	I1217 00:43:25.216872  306295 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1217 00:43:25.216891  306295 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 00:43:25.217780  306295 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:54952->127.0.0.1:33103: read: connection reset by peer
	I1217 00:43:28.344965  306295 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-414413
	
	I1217 00:43:28.345032  306295 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-414413"
	I1217 00:43:28.345097  306295 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-414413
	I1217 00:43:28.363417  306295 main.go:143] libmachine: Using SSH client type: native
	I1217 00:43:28.363640  306295 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1217 00:43:28.363653  306295 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-414413 && echo "default-k8s-diff-port-414413" | sudo tee /etc/hostname
	I1217 00:43:28.501277  306295 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-414413
	
	I1217 00:43:28.501360  306295 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-414413
	I1217 00:43:28.520331  306295 main.go:143] libmachine: Using SSH client type: native
	I1217 00:43:28.520630  306295 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1217 00:43:28.520652  306295 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-414413' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-414413/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-414413' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 00:43:28.648245  306295 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 00:43:28.648275  306295 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22168-12816/.minikube CaCertPath:/home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22168-12816/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22168-12816/.minikube}
	I1217 00:43:28.648294  306295 ubuntu.go:190] setting up certificates
	I1217 00:43:28.648307  306295 provision.go:84] configureAuth start
	I1217 00:43:28.648361  306295 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-414413
	I1217 00:43:28.674762  306295 provision.go:143] copyHostCerts
	I1217 00:43:28.674833  306295 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-12816/.minikube/ca.pem, removing ...
	I1217 00:43:28.674851  306295 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-12816/.minikube/ca.pem
	I1217 00:43:28.674962  306295 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22168-12816/.minikube/ca.pem (1078 bytes)
	I1217 00:43:28.675161  306295 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-12816/.minikube/cert.pem, removing ...
	I1217 00:43:28.675175  306295 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-12816/.minikube/cert.pem
	I1217 00:43:28.675220  306295 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22168-12816/.minikube/cert.pem (1123 bytes)
	I1217 00:43:28.675301  306295 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-12816/.minikube/key.pem, removing ...
	I1217 00:43:28.675313  306295 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-12816/.minikube/key.pem
	I1217 00:43:28.675348  306295 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22168-12816/.minikube/key.pem (1679 bytes)
	I1217 00:43:28.675415  306295 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22168-12816/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-414413 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-414413 localhost minikube]
	I1217 00:43:28.708743  306295 provision.go:177] copyRemoteCerts
	I1217 00:43:28.708801  306295 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 00:43:28.708843  306295 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-414413
	I1217 00:43:28.729865  306295 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/default-k8s-diff-port-414413/id_rsa Username:docker}
	I1217 00:43:28.845281  306295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1217 00:43:28.877349  306295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1217 00:43:28.902351  306295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 00:43:28.928580  306295 provision.go:87] duration metric: took 280.250857ms to configureAuth
	I1217 00:43:28.928613  306295 ubuntu.go:206] setting minikube options for container-runtime
	I1217 00:43:28.928801  306295 config.go:182] Loaded profile config "default-k8s-diff-port-414413": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:43:28.929124  306295 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-414413
	I1217 00:43:28.955949  306295 main.go:143] libmachine: Using SSH client type: native
	I1217 00:43:28.956300  306295 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1217 00:43:28.956325  306295 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 00:43:27.994357  307526 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1217 00:43:27.994631  307526 start.go:159] libmachine.API.Create for "auto-802249" (driver="docker")
	I1217 00:43:27.994662  307526 client.go:173] LocalClient.Create starting
	I1217 00:43:27.994709  307526 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem
	I1217 00:43:27.994740  307526 main.go:143] libmachine: Decoding PEM data...
	I1217 00:43:27.994760  307526 main.go:143] libmachine: Parsing certificate...
	I1217 00:43:27.994823  307526 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22168-12816/.minikube/certs/cert.pem
	I1217 00:43:27.994843  307526 main.go:143] libmachine: Decoding PEM data...
	I1217 00:43:27.994852  307526 main.go:143] libmachine: Parsing certificate...
	I1217 00:43:27.995194  307526 cli_runner.go:164] Run: docker network inspect auto-802249 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1217 00:43:28.013301  307526 cli_runner.go:211] docker network inspect auto-802249 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1217 00:43:28.013389  307526 network_create.go:284] running [docker network inspect auto-802249] to gather additional debugging logs...
	I1217 00:43:28.013413  307526 cli_runner.go:164] Run: docker network inspect auto-802249
	W1217 00:43:28.030618  307526 cli_runner.go:211] docker network inspect auto-802249 returned with exit code 1
	I1217 00:43:28.030651  307526 network_create.go:287] error running [docker network inspect auto-802249]: docker network inspect auto-802249: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network auto-802249 not found
	I1217 00:43:28.030669  307526 network_create.go:289] output of [docker network inspect auto-802249]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network auto-802249 not found
	
	** /stderr **
	I1217 00:43:28.030812  307526 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 00:43:28.050434  307526 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-6ffd1d738f01 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:6e:3d:52:75:47:82} reservation:<nil>}
	I1217 00:43:28.051191  307526 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-280edd437675 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:5e:ae:02:b5:f9:a6} reservation:<nil>}
	I1217 00:43:28.051887  307526 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-9f28d049043c IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:fa:3f:8e:e9:44:56} reservation:<nil>}
	I1217 00:43:28.052544  307526 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-a57026acfc12 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:aa:e6:32:39:49:3b} reservation:<nil>}
	I1217 00:43:28.053095  307526 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-a0b8f164bc66 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:ae:bf:0f:c2:a1:7a} reservation:<nil>}
	I1217 00:43:28.054051  307526 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f3a140}
	I1217 00:43:28.054075  307526 network_create.go:124] attempt to create docker network auto-802249 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1217 00:43:28.054125  307526 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-802249 auto-802249
	I1217 00:43:28.104462  307526 network_create.go:108] docker network auto-802249 192.168.94.0/24 created
	I1217 00:43:28.104500  307526 kic.go:121] calculated static IP "192.168.94.2" for the "auto-802249" container
	I1217 00:43:28.104582  307526 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1217 00:43:28.123300  307526 cli_runner.go:164] Run: docker volume create auto-802249 --label name.minikube.sigs.k8s.io=auto-802249 --label created_by.minikube.sigs.k8s.io=true
	I1217 00:43:28.142730  307526 oci.go:103] Successfully created a docker volume auto-802249
	I1217 00:43:28.142802  307526 cli_runner.go:164] Run: docker run --rm --name auto-802249-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-802249 --entrypoint /usr/bin/test -v auto-802249:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib
	I1217 00:43:28.898458  307526 oci.go:107] Successfully prepared a docker volume auto-802249
	I1217 00:43:28.898530  307526 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1217 00:43:28.898544  307526 kic.go:194] Starting extracting preloaded images to volume ...
	I1217 00:43:28.898637  307526 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22168-12816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v auto-802249:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir
	W1217 00:43:30.368852  301437 pod_ready.go:104] pod "coredns-66bc5c9577-vtspd" is not "Ready", error: <nil>
	W1217 00:43:32.867740  301437 pod_ready.go:104] pod "coredns-66bc5c9577-vtspd" is not "Ready", error: <nil>
	I1217 00:43:29.681399  306295 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 00:43:29.681431  306295 machine.go:97] duration metric: took 4.48596948s to provisionDockerMachine
	I1217 00:43:29.681447  306295 start.go:293] postStartSetup for "default-k8s-diff-port-414413" (driver="docker")
	I1217 00:43:29.681462  306295 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 00:43:29.681523  306295 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 00:43:29.681578  306295 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-414413
	I1217 00:43:29.705905  306295 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/default-k8s-diff-port-414413/id_rsa Username:docker}
	I1217 00:43:29.812429  306295 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 00:43:29.816922  306295 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 00:43:29.816955  306295 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 00:43:29.816967  306295 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-12816/.minikube/addons for local assets ...
	I1217 00:43:29.817052  306295 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-12816/.minikube/files for local assets ...
	I1217 00:43:29.817160  306295 filesync.go:149] local asset: /home/jenkins/minikube-integration/22168-12816/.minikube/files/etc/ssl/certs/163542.pem -> 163542.pem in /etc/ssl/certs
	I1217 00:43:29.817292  306295 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 00:43:29.827352  306295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/files/etc/ssl/certs/163542.pem --> /etc/ssl/certs/163542.pem (1708 bytes)
	I1217 00:43:29.849430  306295 start.go:296] duration metric: took 167.967034ms for postStartSetup
	I1217 00:43:29.849524  306295 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 00:43:29.849572  306295 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-414413
	I1217 00:43:29.873138  306295 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/default-k8s-diff-port-414413/id_rsa Username:docker}
	I1217 00:43:29.973222  306295 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 00:43:29.979217  306295 fix.go:56] duration metric: took 5.137583072s for fixHost
	I1217 00:43:29.979245  306295 start.go:83] releasing machines lock for "default-k8s-diff-port-414413", held for 5.137641613s
	I1217 00:43:29.979313  306295 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-414413
	I1217 00:43:30.003017  306295 ssh_runner.go:195] Run: cat /version.json
	I1217 00:43:30.003088  306295 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-414413
	I1217 00:43:30.003099  306295 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 00:43:30.003224  306295 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-414413
	I1217 00:43:30.027827  306295 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/default-k8s-diff-port-414413/id_rsa Username:docker}
	I1217 00:43:30.027827  306295 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/default-k8s-diff-port-414413/id_rsa Username:docker}
	I1217 00:43:30.203372  306295 ssh_runner.go:195] Run: systemctl --version
	I1217 00:43:30.212836  306295 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 00:43:30.251227  306295 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 00:43:30.256929  306295 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 00:43:30.257011  306295 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 00:43:30.267178  306295 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1217 00:43:30.267206  306295 start.go:496] detecting cgroup driver to use...
	I1217 00:43:30.267236  306295 detect.go:190] detected "systemd" cgroup driver on host os
	I1217 00:43:30.267279  306295 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 00:43:30.287349  306295 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 00:43:30.303766  306295 docker.go:218] disabling cri-docker service (if available) ...
	I1217 00:43:30.303824  306295 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 00:43:30.323548  306295 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 00:43:30.340545  306295 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 00:43:30.459414  306295 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 00:43:30.576563  306295 docker.go:234] disabling docker service ...
	I1217 00:43:30.576631  306295 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 00:43:30.597770  306295 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 00:43:30.615190  306295 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 00:43:30.735085  306295 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 00:43:30.854647  306295 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 00:43:30.871811  306295 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 00:43:30.892149  306295 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 00:43:30.892212  306295 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:43:30.904058  306295 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1217 00:43:30.904190  306295 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:43:30.917063  306295 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:43:30.928165  306295 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:43:30.940098  306295 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 00:43:30.951468  306295 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:43:30.963044  306295 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:43:30.975338  306295 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:43:30.988168  306295 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 00:43:30.998153  306295 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 00:43:31.008817  306295 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:43:31.123938  306295 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1217 00:43:33.676337  306295 ssh_runner.go:235] Completed: sudo systemctl restart crio: (2.552363306s)
	I1217 00:43:33.676369  306295 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 00:43:33.676417  306295 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 00:43:33.680500  306295 start.go:564] Will wait 60s for crictl version
	I1217 00:43:33.680561  306295 ssh_runner.go:195] Run: which crictl
	I1217 00:43:33.684471  306295 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 00:43:33.709533  306295 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1217 00:43:33.709601  306295 ssh_runner.go:195] Run: crio --version
	I1217 00:43:33.738615  306295 ssh_runner.go:195] Run: crio --version
	I1217 00:43:33.775920  306295 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1217 00:43:33.777142  306295 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-414413 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 00:43:33.795353  306295 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1217 00:43:33.800123  306295 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 00:43:33.811084  306295 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-414413 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-414413 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 00:43:33.811227  306295 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1217 00:43:33.811279  306295 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 00:43:33.844811  306295 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 00:43:33.844831  306295 crio.go:433] Images already preloaded, skipping extraction
	I1217 00:43:33.844887  306295 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 00:43:33.870923  306295 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 00:43:33.870948  306295 cache_images.go:86] Images are preloaded, skipping loading
	I1217 00:43:33.870957  306295 kubeadm.go:935] updating node { 192.168.76.2 8444 v1.34.2 crio true true} ...
	I1217 00:43:33.871087  306295 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-414413 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-414413 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 00:43:33.871165  306295 ssh_runner.go:195] Run: crio config
	I1217 00:43:33.925371  306295 cni.go:84] Creating CNI manager for ""
	I1217 00:43:33.925393  306295 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 00:43:33.925409  306295 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 00:43:33.925433  306295 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-414413 NodeName:default-k8s-diff-port-414413 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 00:43:33.925587  306295 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-414413"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 00:43:33.925651  306295 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1217 00:43:33.934954  306295 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 00:43:33.935047  306295 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 00:43:33.943677  306295 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1217 00:43:33.958880  306295 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1217 00:43:33.973470  306295 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I1217 00:43:33.987696  306295 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1217 00:43:33.992165  306295 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
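
The one-liner above rewrites /etc/hosts idempotently: drop any existing line for control-plane.minikube.internal, then append the desired mapping. The same idea as a local Go sketch (the log does it remotely over SSH with grep/tee); path, IP and name are the ones shown above.

// hosts_pin.go: sketch of the /etc/hosts rewrite above.
package main

import (
	"fmt"
	"os"
	"strings"
)

func pinHost(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // remove the stale mapping
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := pinHost("/etc/hosts", "192.168.76.2", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
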
	I1217 00:43:34.001928  306295 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:43:34.092409  306295 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 00:43:34.108980  306295 certs.go:69] Setting up /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/default-k8s-diff-port-414413 for IP: 192.168.76.2
	I1217 00:43:34.109018  306295 certs.go:195] generating shared ca certs ...
	I1217 00:43:34.109037  306295 certs.go:227] acquiring lock for ca certs: {Name:mk3fafd0dd66863a6056cb02497503a5e6afecf4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:43:34.109188  306295 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22168-12816/.minikube/ca.key
	I1217 00:43:34.109255  306295 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22168-12816/.minikube/proxy-client-ca.key
	I1217 00:43:34.109271  306295 certs.go:257] generating profile certs ...
	I1217 00:43:34.109424  306295 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/default-k8s-diff-port-414413/client.key
	I1217 00:43:34.110428  306295 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/default-k8s-diff-port-414413/apiserver.key.0797176d
	I1217 00:43:34.110528  306295 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/default-k8s-diff-port-414413/proxy-client.key
	I1217 00:43:34.110676  306295 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/16354.pem (1338 bytes)
	W1217 00:43:34.110725  306295 certs.go:480] ignoring /home/jenkins/minikube-integration/22168-12816/.minikube/certs/16354_empty.pem, impossibly tiny 0 bytes
	I1217 00:43:34.110735  306295 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca-key.pem (1679 bytes)
	I1217 00:43:34.110772  306295 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem (1078 bytes)
	I1217 00:43:34.110806  306295 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/cert.pem (1123 bytes)
	I1217 00:43:34.111146  306295 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/key.pem (1679 bytes)
	I1217 00:43:34.111247  306295 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/files/etc/ssl/certs/163542.pem (1708 bytes)
	I1217 00:43:34.112136  306295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 00:43:34.139905  306295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1217 00:43:34.163220  306295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 00:43:34.188725  306295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1671 bytes)
	I1217 00:43:34.228416  306295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/default-k8s-diff-port-414413/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1217 00:43:34.256716  306295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/default-k8s-diff-port-414413/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 00:43:34.280930  306295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/default-k8s-diff-port-414413/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 00:43:34.300601  306295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/default-k8s-diff-port-414413/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 00:43:34.321878  306295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 00:43:34.341964  306295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/certs/16354.pem --> /usr/share/ca-certificates/16354.pem (1338 bytes)
	I1217 00:43:34.361554  306295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/files/etc/ssl/certs/163542.pem --> /usr/share/ca-certificates/163542.pem (1708 bytes)
	I1217 00:43:34.379883  306295 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 00:43:34.393397  306295 ssh_runner.go:195] Run: openssl version
	I1217 00:43:34.400563  306295 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/163542.pem
	I1217 00:43:34.408550  306295 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/163542.pem /etc/ssl/certs/163542.pem
	I1217 00:43:34.416706  306295 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/163542.pem
	I1217 00:43:34.420711  306295 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 00:13 /usr/share/ca-certificates/163542.pem
	I1217 00:43:34.420765  306295 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/163542.pem
	I1217 00:43:34.458970  306295 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 00:43:34.467164  306295 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:43:34.475301  306295 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 00:43:34.483588  306295 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:43:34.487260  306295 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 00:05 /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:43:34.487328  306295 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:43:34.523803  306295 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 00:43:34.531254  306295 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/16354.pem
	I1217 00:43:34.538816  306295 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/16354.pem /etc/ssl/certs/16354.pem
	I1217 00:43:34.546250  306295 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16354.pem
	I1217 00:43:34.550565  306295 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 00:13 /usr/share/ca-certificates/16354.pem
	I1217 00:43:34.550620  306295 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16354.pem
	I1217 00:43:34.595335  306295 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
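
Each certificate installed above follows the same pattern: copy the PEM into /usr/share/ca-certificates, symlink it into /etc/ssl/certs under its OpenSSL subject hash (e.g. b5213941.0), and confirm the link with test -L. A small Go sketch of that verification, shelling out to openssl for the hash exactly as the log does; the path is the one shown above, and this is illustrative only.

// cahash_check.go: verify a CA PEM is linked under its OpenSSL subject hash.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	pemPath := "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := "/etc/ssl/certs/" + hash + ".0"
	if fi, err := os.Lstat(link); err != nil || fi.Mode()&os.ModeSymlink == 0 {
		fmt.Println("missing symlink, would run: ln -fs", pemPath, link)
		return
	}
	fmt.Println("subject-hash symlink present:", link)
}
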
	I1217 00:43:34.603361  306295 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 00:43:34.607349  306295 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 00:43:34.645208  306295 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 00:43:34.690905  306295 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 00:43:34.737902  306295 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 00:43:34.794952  306295 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 00:43:34.844915  306295 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
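
openssl x509 -checkend 86400, used for each certificate above, asks whether it expires within the next 24 hours. The crypto/x509 equivalent, assuming the PEM is readable locally (path taken from the log):

// checkend.go: report whether a certificate expires within the next 24h.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate will expire within 24h:", cert.NotAfter)
	} else {
		fmt.Println("certificate valid past the check window:", cert.NotAfter)
	}
}
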
	I1217 00:43:34.882722  306295 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-414413 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-414413 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:43:34.882789  306295 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 00:43:34.882842  306295 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 00:43:34.917334  306295 cri.go:89] found id: "2a7b291de067a5044f406eaa0104c52261424e3730e6c2e4d38864b41943eddd"
	I1217 00:43:34.917357  306295 cri.go:89] found id: "4dcc77a289bba808ececc2d4f0efa70e966e843b2057d6de5ad0054d0be435c8"
	I1217 00:43:34.917363  306295 cri.go:89] found id: "ba3df04c6b3feaf2f234a1a9b098c1269d844cdbaf6531304d6ddd40b10820d5"
	I1217 00:43:34.917368  306295 cri.go:89] found id: "eecadcae34c3698337c66c6d6dbab2066993e3216b64d194344407552bc449b5"
	I1217 00:43:34.917373  306295 cri.go:89] found id: ""
	I1217 00:43:34.917413  306295 ssh_runner.go:195] Run: sudo runc list -f json
	W1217 00:43:34.930607  306295 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T00:43:34Z" level=error msg="open /run/runc: no such file or directory"
	I1217 00:43:34.930674  306295 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 00:43:34.938821  306295 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1217 00:43:34.938837  306295 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1217 00:43:34.938875  306295 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1217 00:43:34.946910  306295 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1217 00:43:34.948063  306295 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-414413" does not appear in /home/jenkins/minikube-integration/22168-12816/kubeconfig
	I1217 00:43:34.948899  306295 kubeconfig.go:62] /home/jenkins/minikube-integration/22168-12816/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-414413" cluster setting kubeconfig missing "default-k8s-diff-port-414413" context setting]
	I1217 00:43:34.950019  306295 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/kubeconfig: {Name:mkd977475b39383babd1c1bcd25b2b3c1ea2d727 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:43:34.952215  306295 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1217 00:43:34.961195  306295 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1217 00:43:34.961224  306295 kubeadm.go:602] duration metric: took 22.380608ms to restartPrimaryControlPlane
	I1217 00:43:34.961233  306295 kubeadm.go:403] duration metric: took 78.517227ms to StartCluster
	I1217 00:43:34.961249  306295 settings.go:142] acquiring lock: {Name:mk7d7632cd00ceda791845d793d841181ea8188a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:43:34.961307  306295 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22168-12816/kubeconfig
	I1217 00:43:34.963205  306295 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/kubeconfig: {Name:mkd977475b39383babd1c1bcd25b2b3c1ea2d727 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:43:34.963466  306295 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 00:43:34.963687  306295 config.go:182] Loaded profile config "default-k8s-diff-port-414413": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:43:34.963736  306295 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 00:43:34.963816  306295 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-414413"
	I1217 00:43:34.963837  306295 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-414413"
	W1217 00:43:34.963845  306295 addons.go:248] addon storage-provisioner should already be in state true
	I1217 00:43:34.963874  306295 host.go:66] Checking if "default-k8s-diff-port-414413" exists ...
	I1217 00:43:34.964351  306295 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-414413 --format={{.State.Status}}
	I1217 00:43:34.964420  306295 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-414413"
	I1217 00:43:34.964441  306295 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-414413"
	W1217 00:43:34.964449  306295 addons.go:248] addon dashboard should already be in state true
	I1217 00:43:34.964470  306295 host.go:66] Checking if "default-k8s-diff-port-414413" exists ...
	I1217 00:43:34.964505  306295 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-414413"
	I1217 00:43:34.964525  306295 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-414413"
	I1217 00:43:34.964803  306295 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-414413 --format={{.State.Status}}
	I1217 00:43:34.964969  306295 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-414413 --format={{.State.Status}}
	I1217 00:43:34.967219  306295 out.go:179] * Verifying Kubernetes components...
	I1217 00:43:34.968657  306295 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:43:34.993484  306295 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-414413"
	W1217 00:43:34.993507  306295 addons.go:248] addon default-storageclass should already be in state true
	I1217 00:43:34.993533  306295 host.go:66] Checking if "default-k8s-diff-port-414413" exists ...
	I1217 00:43:34.993963  306295 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-414413 --format={{.State.Status}}
	I1217 00:43:34.995235  306295 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 00:43:34.995246  306295 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1217 00:43:34.996518  306295 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:43:34.997338  306295 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 00:43:34.996560  306295 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1217 00:43:33.557934  307526 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22168-12816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v auto-802249:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir: (4.659238848s)
	I1217 00:43:33.557969  307526 kic.go:203] duration metric: took 4.659422421s to extract preloaded images to volume ...
	W1217 00:43:33.558064  307526 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1217 00:43:33.558109  307526 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1217 00:43:33.558147  307526 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1217 00:43:33.614832  307526 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-802249 --name auto-802249 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-802249 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-802249 --network auto-802249 --ip 192.168.94.2 --volume auto-802249:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78
	I1217 00:43:33.899162  307526 cli_runner.go:164] Run: docker container inspect auto-802249 --format={{.State.Running}}
	I1217 00:43:33.918604  307526 cli_runner.go:164] Run: docker container inspect auto-802249 --format={{.State.Status}}
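
After docker run returns, the container is polled with docker container inspect until it reports a running state (the .State.Running / .State.Status checks above). A small sketch of that wait loop, using the container name from this run; illustrative only.

// kic_wait.go: poll the kic container until docker reports it as "running".
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	for i := 0; i < 30; i++ {
		out, err := exec.Command("docker", "container", "inspect", "auto-802249",
			"--format", "{{.State.Status}}").Output()
		if err == nil && strings.TrimSpace(string(out)) == "running" {
			fmt.Println("container is running")
			return
		}
		time.Sleep(time.Second)
	}
	fmt.Println("timed out waiting for container")
}
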
	I1217 00:43:33.939167  307526 cli_runner.go:164] Run: docker exec auto-802249 stat /var/lib/dpkg/alternatives/iptables
	I1217 00:43:33.991527  307526 oci.go:144] the created container "auto-802249" has a running status.
	I1217 00:43:33.991551  307526 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22168-12816/.minikube/machines/auto-802249/id_rsa...
	I1217 00:43:34.093291  307526 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22168-12816/.minikube/machines/auto-802249/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1217 00:43:34.123193  307526 cli_runner.go:164] Run: docker container inspect auto-802249 --format={{.State.Status}}
	I1217 00:43:34.149038  307526 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1217 00:43:34.149065  307526 kic_runner.go:114] Args: [docker exec --privileged auto-802249 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1217 00:43:34.202194  307526 cli_runner.go:164] Run: docker container inspect auto-802249 --format={{.State.Status}}
	I1217 00:43:34.231461  307526 machine.go:94] provisionDockerMachine start ...
	I1217 00:43:34.231554  307526 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-802249
	I1217 00:43:34.257565  307526 main.go:143] libmachine: Using SSH client type: native
	I1217 00:43:34.257906  307526 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1217 00:43:34.257925  307526 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 00:43:34.396715  307526 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-802249
	
	I1217 00:43:34.396747  307526 ubuntu.go:182] provisioning hostname "auto-802249"
	I1217 00:43:34.396806  307526 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-802249
	I1217 00:43:34.416957  307526 main.go:143] libmachine: Using SSH client type: native
	I1217 00:43:34.417264  307526 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1217 00:43:34.417285  307526 main.go:143] libmachine: About to run SSH command:
	sudo hostname auto-802249 && echo "auto-802249" | sudo tee /etc/hostname
	I1217 00:43:34.556089  307526 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-802249
	
	I1217 00:43:34.556171  307526 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-802249
	I1217 00:43:34.576140  307526 main.go:143] libmachine: Using SSH client type: native
	I1217 00:43:34.576361  307526 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1217 00:43:34.576379  307526 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-802249' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-802249/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-802249' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 00:43:34.703765  307526 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 00:43:34.703789  307526 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22168-12816/.minikube CaCertPath:/home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22168-12816/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22168-12816/.minikube}
	I1217 00:43:34.703820  307526 ubuntu.go:190] setting up certificates
	I1217 00:43:34.703831  307526 provision.go:84] configureAuth start
	I1217 00:43:34.703873  307526 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-802249
	I1217 00:43:34.725439  307526 provision.go:143] copyHostCerts
	I1217 00:43:34.725502  307526 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-12816/.minikube/ca.pem, removing ...
	I1217 00:43:34.725516  307526 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-12816/.minikube/ca.pem
	I1217 00:43:34.725581  307526 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22168-12816/.minikube/ca.pem (1078 bytes)
	I1217 00:43:34.725704  307526 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-12816/.minikube/cert.pem, removing ...
	I1217 00:43:34.725717  307526 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-12816/.minikube/cert.pem
	I1217 00:43:34.725761  307526 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22168-12816/.minikube/cert.pem (1123 bytes)
	I1217 00:43:34.725861  307526 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-12816/.minikube/key.pem, removing ...
	I1217 00:43:34.725877  307526 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-12816/.minikube/key.pem
	I1217 00:43:34.725916  307526 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22168-12816/.minikube/key.pem (1679 bytes)
	I1217 00:43:34.726020  307526 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22168-12816/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca-key.pem org=jenkins.auto-802249 san=[127.0.0.1 192.168.94.2 auto-802249 localhost minikube]
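
The "generating server cert" step above issues a machine certificate signed by the profile CA, with the node's IPs and hostnames as SANs. A compact crypto/x509 sketch of that shape, reusing the SAN list from the log; the file names ca.pem / ca-key.pem / server.pem are assumptions, and this is not minikube's provisioner.

// servercert.go: sign a server certificate with a CA, SANs from the log above.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/tls"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	caPair, err := tls.LoadX509KeyPair("ca.pem", "ca-key.pem") // the CA pair shown above
	if err != nil {
		panic(err)
	}
	caCert, err := x509.ParseCertificate(caPair.Certificate[0])
	if err != nil {
		panic(err)
	}
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.auto-802249"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"auto-802249", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.94.2")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caPair.PrivateKey)
	if err != nil {
		panic(err)
	}
	out, err := os.Create("server.pem")
	if err != nil {
		panic(err)
	}
	defer out.Close()
	pem.Encode(out, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
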
	I1217 00:43:34.751321  307526 provision.go:177] copyRemoteCerts
	I1217 00:43:34.751379  307526 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 00:43:34.751409  307526 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-802249
	I1217 00:43:34.775156  307526 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/auto-802249/id_rsa Username:docker}
	I1217 00:43:34.880372  307526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1217 00:43:34.902413  307526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1217 00:43:34.922726  307526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1217 00:43:34.940875  307526 provision.go:87] duration metric: took 237.020186ms to configureAuth
	I1217 00:43:34.940900  307526 ubuntu.go:206] setting minikube options for container-runtime
	I1217 00:43:34.941119  307526 config.go:182] Loaded profile config "auto-802249": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:43:34.941239  307526 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-802249
	I1217 00:43:34.962212  307526 main.go:143] libmachine: Using SSH client type: native
	I1217 00:43:34.962501  307526 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1217 00:43:34.962526  307526 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 00:43:35.300957  307526 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 00:43:35.300986  307526 machine.go:97] duration metric: took 1.069500633s to provisionDockerMachine
	I1217 00:43:35.301067  307526 client.go:176] duration metric: took 7.306395274s to LocalClient.Create
	I1217 00:43:35.301094  307526 start.go:167] duration metric: took 7.306462514s to libmachine.API.Create "auto-802249"
	I1217 00:43:35.301109  307526 start.go:293] postStartSetup for "auto-802249" (driver="docker")
	I1217 00:43:35.301122  307526 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 00:43:35.301201  307526 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 00:43:35.301250  307526 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-802249
	I1217 00:43:35.323372  307526 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/auto-802249/id_rsa Username:docker}
	I1217 00:43:35.421567  307526 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 00:43:35.425242  307526 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 00:43:35.425271  307526 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 00:43:35.425282  307526 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-12816/.minikube/addons for local assets ...
	I1217 00:43:35.425335  307526 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-12816/.minikube/files for local assets ...
	I1217 00:43:35.425425  307526 filesync.go:149] local asset: /home/jenkins/minikube-integration/22168-12816/.minikube/files/etc/ssl/certs/163542.pem -> 163542.pem in /etc/ssl/certs
	I1217 00:43:35.425534  307526 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 00:43:35.433362  307526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/files/etc/ssl/certs/163542.pem --> /etc/ssl/certs/163542.pem (1708 bytes)
	I1217 00:43:35.453775  307526 start.go:296] duration metric: took 152.651929ms for postStartSetup
	I1217 00:43:35.454172  307526 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-802249
	I1217 00:43:35.472981  307526 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/auto-802249/config.json ...
	I1217 00:43:35.473360  307526 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 00:43:35.473412  307526 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-802249
	I1217 00:43:35.491304  307526 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/auto-802249/id_rsa Username:docker}
	I1217 00:43:35.581179  307526 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 00:43:35.585822  307526 start.go:128] duration metric: took 7.593323059s to createHost
	I1217 00:43:35.585847  307526 start.go:83] releasing machines lock for "auto-802249", held for 7.593474141s
	I1217 00:43:35.585914  307526 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-802249
	I1217 00:43:35.610124  307526 ssh_runner.go:195] Run: cat /version.json
	I1217 00:43:35.610181  307526 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-802249
	I1217 00:43:35.610180  307526 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 00:43:35.610256  307526 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-802249
	I1217 00:43:35.630650  307526 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/auto-802249/id_rsa Username:docker}
	I1217 00:43:35.632182  307526 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/auto-802249/id_rsa Username:docker}
	I1217 00:43:35.799876  307526 ssh_runner.go:195] Run: systemctl --version
	I1217 00:43:35.806697  307526 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 00:43:35.847614  307526 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 00:43:35.853164  307526 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 00:43:35.853241  307526 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 00:43:35.884086  307526 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1217 00:43:35.884111  307526 start.go:496] detecting cgroup driver to use...
	I1217 00:43:35.884140  307526 detect.go:190] detected "systemd" cgroup driver on host os
	I1217 00:43:35.884187  307526 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 00:43:35.904246  307526 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 00:43:35.918195  307526 docker.go:218] disabling cri-docker service (if available) ...
	I1217 00:43:35.918257  307526 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 00:43:35.940080  307526 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 00:43:35.960661  307526 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 00:43:36.066109  307526 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 00:43:36.172766  307526 docker.go:234] disabling docker service ...
	I1217 00:43:36.172840  307526 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 00:43:36.191820  307526 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 00:43:36.205981  307526 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 00:43:36.304667  307526 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 00:43:36.396978  307526 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 00:43:36.409464  307526 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 00:43:36.423426  307526 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 00:43:36.423496  307526 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:43:36.433722  307526 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1217 00:43:36.433783  307526 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:43:36.447105  307526 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:43:36.456824  307526 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:43:36.465936  307526 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 00:43:36.474179  307526 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:43:36.482687  307526 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:43:36.498792  307526 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:43:36.509491  307526 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 00:43:36.520216  307526 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 00:43:36.531304  307526 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:43:36.639178  307526 ssh_runner.go:195] Run: sudo systemctl restart crio
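
The sed edits above pin pause_image and cgroup_manager in the cri-o drop-in before the daemon-reload and restart. The same "force a key = value line" idea as a Go sketch; illustrative only, since the log performs these edits remotely with sed.

// crio_dropin.go: force key = "value" lines in the cri-o drop-in config.
package main

import (
	"fmt"
	"os"
	"strings"
)

func setDropin(path, key, value string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
	replaced := false
	for i, line := range lines {
		if strings.Contains(line, key+" = ") {
			lines[i] = fmt.Sprintf("%s = %q", key, value)
			replaced = true
		}
	}
	if !replaced {
		lines = append(lines, fmt.Sprintf("%s = %q", key, value))
	}
	return os.WriteFile(path, []byte(strings.Join(lines, "\n")+"\n"), 0644)
}

func main() {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	if err := setDropin(conf, "pause_image", "registry.k8s.io/pause:3.10.1"); err != nil {
		panic(err)
	}
	if err := setDropin(conf, "cgroup_manager", "systemd"); err != nil {
		panic(err)
	}
	// a real run follows with: systemctl daemon-reload && systemctl restart crio
}
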
	I1217 00:43:36.830624  307526 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 00:43:36.830701  307526 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 00:43:36.835398  307526 start.go:564] Will wait 60s for crictl version
	I1217 00:43:36.835461  307526 ssh_runner.go:195] Run: which crictl
	I1217 00:43:36.839344  307526 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 00:43:36.867636  307526 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1217 00:43:36.867718  307526 ssh_runner.go:195] Run: crio --version
	I1217 00:43:36.898540  307526 ssh_runner.go:195] Run: crio --version
	I1217 00:43:36.935544  307526 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1217 00:43:34.997453  306295 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-414413
	I1217 00:43:34.998447  306295 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1217 00:43:34.998463  306295 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1217 00:43:34.998517  306295 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-414413
	I1217 00:43:35.029235  306295 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 00:43:35.029322  306295 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 00:43:35.029420  306295 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-414413
	I1217 00:43:35.034809  306295 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/default-k8s-diff-port-414413/id_rsa Username:docker}
	I1217 00:43:35.039130  306295 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/default-k8s-diff-port-414413/id_rsa Username:docker}
	I1217 00:43:35.058900  306295 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/default-k8s-diff-port-414413/id_rsa Username:docker}
	I1217 00:43:35.131678  306295 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 00:43:35.147894  306295 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-414413" to be "Ready" ...
	I1217 00:43:35.152120  306295 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1217 00:43:35.152140  306295 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1217 00:43:35.156835  306295 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:43:35.167716  306295 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1217 00:43:35.167737  306295 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1217 00:43:35.175633  306295 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:43:35.186148  306295 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1217 00:43:35.186174  306295 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1217 00:43:35.207150  306295 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1217 00:43:35.207173  306295 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1217 00:43:35.224666  306295 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1217 00:43:35.224693  306295 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1217 00:43:35.244612  306295 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1217 00:43:35.244644  306295 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1217 00:43:35.260605  306295 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1217 00:43:35.260625  306295 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1217 00:43:35.276411  306295 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1217 00:43:35.276439  306295 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1217 00:43:35.289594  306295 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1217 00:43:35.289610  306295 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1217 00:43:35.304047  306295 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1217 00:43:36.618697  306295 node_ready.go:49] node "default-k8s-diff-port-414413" is "Ready"
	I1217 00:43:36.618730  306295 node_ready.go:38] duration metric: took 1.470803516s for node "default-k8s-diff-port-414413" to be "Ready" ...
	I1217 00:43:36.618747  306295 api_server.go:52] waiting for apiserver process to appear ...
	I1217 00:43:36.618799  306295 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:43:37.195015  306295 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.038102082s)
	I1217 00:43:37.195017  306295 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.019330508s)
	I1217 00:43:37.195167  306295 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.891083549s)
	I1217 00:43:37.195199  306295 api_server.go:72] duration metric: took 2.231702348s to wait for apiserver process to appear ...
	I1217 00:43:37.195214  306295 api_server.go:88] waiting for apiserver healthz status ...
	I1217 00:43:37.195235  306295 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1217 00:43:37.196936  306295 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-414413 addons enable metrics-server
	
	I1217 00:43:37.200106  306295 api_server.go:279] https://192.168.76.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1217 00:43:37.200129  306295 api_server.go:103] status: https://192.168.76.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
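
The 500 above is expected this early: a couple of poststarthooks (rbac/bootstrap-roles and the priority-class bootstrap) have not finished, and the wait loop keeps polling /healthz until it returns 200. A minimal version of that probe, with InsecureSkipVerify standing in for the cluster CA handling minikube actually does; endpoint taken from the log.

// healthz_probe.go: poll the apiserver /healthz endpoint once and report.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.76.2:8444/healthz")
	if err != nil {
		fmt.Println("healthz not reachable yet:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK {
		// a brief 500 with failed poststarthooks is normal after a restart;
		// callers retry until it clears
		fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		return
	}
	fmt.Println("healthz ok")
}
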
	I1217 00:43:37.203574  306295 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1217 00:43:36.937801  307526 cli_runner.go:164] Run: docker network inspect auto-802249 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 00:43:36.957458  307526 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1217 00:43:36.962277  307526 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 00:43:36.973333  307526 kubeadm.go:884] updating cluster {Name:auto-802249 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:auto-802249 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 00:43:36.973441  307526 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1217 00:43:36.973502  307526 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 00:43:37.009823  307526 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 00:43:37.009856  307526 crio.go:433] Images already preloaded, skipping extraction
	I1217 00:43:37.009925  307526 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 00:43:37.040414  307526 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 00:43:37.040440  307526 cache_images.go:86] Images are preloaded, skipping loading
	I1217 00:43:37.040450  307526 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.34.2 crio true true} ...
	I1217 00:43:37.040559  307526 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=auto-802249 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:auto-802249 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 00:43:37.040646  307526 ssh_runner.go:195] Run: crio config
	I1217 00:43:37.101505  307526 cni.go:84] Creating CNI manager for ""
	I1217 00:43:37.101530  307526 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 00:43:37.101546  307526 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 00:43:37.101567  307526 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-802249 NodeName:auto-802249 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 00:43:37.101690  307526 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-802249"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 00:43:37.101757  307526 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1217 00:43:37.110328  307526 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 00:43:37.110392  307526 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 00:43:37.119275  307526 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
	I1217 00:43:37.132926  307526 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1217 00:43:37.147118  307526 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2207 bytes)
	I1217 00:43:37.160480  307526 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1217 00:43:37.164140  307526 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 00:43:37.175007  307526 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:43:37.280360  307526 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 00:43:37.305519  307526 certs.go:69] Setting up /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/auto-802249 for IP: 192.168.94.2
	I1217 00:43:37.305542  307526 certs.go:195] generating shared ca certs ...
	I1217 00:43:37.305562  307526 certs.go:227] acquiring lock for ca certs: {Name:mk3fafd0dd66863a6056cb02497503a5e6afecf4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:43:37.305725  307526 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22168-12816/.minikube/ca.key
	I1217 00:43:37.305789  307526 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22168-12816/.minikube/proxy-client-ca.key
	I1217 00:43:37.305802  307526 certs.go:257] generating profile certs ...
	I1217 00:43:37.305867  307526 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/auto-802249/client.key
	I1217 00:43:37.305891  307526 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/auto-802249/client.crt with IP's: []
	I1217 00:43:37.344609  307526 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/auto-802249/client.crt ...
	I1217 00:43:37.344636  307526 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/auto-802249/client.crt: {Name:mk5d53455946f112a1748aa6d9e7b0453a9bcfeb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:43:37.344792  307526 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/auto-802249/client.key ...
	I1217 00:43:37.344806  307526 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/auto-802249/client.key: {Name:mk4136a4b5cdab991b4548f1fda38b61fac41c4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:43:37.344881  307526 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/auto-802249/apiserver.key.9f1d7504
	I1217 00:43:37.344898  307526 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/auto-802249/apiserver.crt.9f1d7504 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1217 00:43:37.381137  307526 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/auto-802249/apiserver.crt.9f1d7504 ...
	I1217 00:43:37.381162  307526 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/auto-802249/apiserver.crt.9f1d7504: {Name:mk538b4009d544e7e5844aadc3ac0377c048b69f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:43:37.381316  307526 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/auto-802249/apiserver.key.9f1d7504 ...
	I1217 00:43:37.381329  307526 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/auto-802249/apiserver.key.9f1d7504: {Name:mk59291695510cb10614430a790825e45e435105 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:43:37.381397  307526 certs.go:382] copying /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/auto-802249/apiserver.crt.9f1d7504 -> /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/auto-802249/apiserver.crt
	I1217 00:43:37.381479  307526 certs.go:386] copying /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/auto-802249/apiserver.key.9f1d7504 -> /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/auto-802249/apiserver.key
	I1217 00:43:37.381550  307526 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/auto-802249/proxy-client.key
	I1217 00:43:37.381565  307526 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/auto-802249/proxy-client.crt with IP's: []
	I1217 00:43:37.467073  307526 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/auto-802249/proxy-client.crt ...
	I1217 00:43:37.467097  307526 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/auto-802249/proxy-client.crt: {Name:mkde7eadaed81a1981a7c6ffa4efc6b06449235e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:43:37.467256  307526 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/auto-802249/proxy-client.key ...
	I1217 00:43:37.467268  307526 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/auto-802249/proxy-client.key: {Name:mk6bb1a90a6259966890f42cf520f07ee481acb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:43:37.467429  307526 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/16354.pem (1338 bytes)
	W1217 00:43:37.467469  307526 certs.go:480] ignoring /home/jenkins/minikube-integration/22168-12816/.minikube/certs/16354_empty.pem, impossibly tiny 0 bytes
	I1217 00:43:37.467479  307526 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca-key.pem (1679 bytes)
	I1217 00:43:37.467512  307526 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem (1078 bytes)
	I1217 00:43:37.467538  307526 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/cert.pem (1123 bytes)
	I1217 00:43:37.467561  307526 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/key.pem (1679 bytes)
	I1217 00:43:37.467600  307526 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/files/etc/ssl/certs/163542.pem (1708 bytes)
	I1217 00:43:37.468184  307526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 00:43:37.486439  307526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1217 00:43:37.504781  307526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 00:43:37.525836  307526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1671 bytes)
	I1217 00:43:37.548869  307526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/auto-802249/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1217 00:43:37.574169  307526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/auto-802249/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1217 00:43:37.595474  307526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/auto-802249/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 00:43:37.621108  307526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/auto-802249/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 00:43:37.642499  307526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/files/etc/ssl/certs/163542.pem --> /usr/share/ca-certificates/163542.pem (1708 bytes)
	I1217 00:43:37.668595  307526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 00:43:37.689079  307526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/certs/16354.pem --> /usr/share/ca-certificates/16354.pem (1338 bytes)
	I1217 00:43:37.709377  307526 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 00:43:37.724815  307526 ssh_runner.go:195] Run: openssl version
	I1217 00:43:37.732086  307526 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/163542.pem
	I1217 00:43:37.741910  307526 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/163542.pem /etc/ssl/certs/163542.pem
	I1217 00:43:37.751162  307526 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/163542.pem
	I1217 00:43:37.755733  307526 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 00:13 /usr/share/ca-certificates/163542.pem
	I1217 00:43:37.755789  307526 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/163542.pem
	W1217 00:43:34.868227  301437 pod_ready.go:104] pod "coredns-66bc5c9577-vtspd" is not "Ready", error: <nil>
	W1217 00:43:36.869681  301437 pod_ready.go:104] pod "coredns-66bc5c9577-vtspd" is not "Ready", error: <nil>
	I1217 00:43:37.206683  306295 addons.go:530] duration metric: took 2.242945901s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1217 00:43:37.696054  306295 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1217 00:43:37.701802  306295 api_server.go:279] https://192.168.76.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1217 00:43:37.701854  306295 api_server.go:103] status: https://192.168.76.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1217 00:43:38.196211  306295 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1217 00:43:38.201128  306295 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
	I1217 00:43:38.202440  306295 api_server.go:141] control plane version: v1.34.2
	I1217 00:43:38.202469  306295 api_server.go:131] duration metric: took 1.00724732s to wait for apiserver health ...
	I1217 00:43:38.202480  306295 system_pods.go:43] waiting for kube-system pods to appear ...
	I1217 00:43:38.207407  306295 system_pods.go:59] 8 kube-system pods found
	I1217 00:43:38.207442  306295 system_pods.go:61] "coredns-66bc5c9577-v76f4" [1370bcd6-f828-4ed0-af58-d2d87c7044bd] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 00:43:38.207453  306295 system_pods.go:61] "etcd-default-k8s-diff-port-414413" [286460a9-8a6c-4939-a2a0-0d5b31620d9a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1217 00:43:38.207463  306295 system_pods.go:61] "kindnet-hxhbf" [a4c2ed1b-ad48-484e-b779-4b93f3a72d0b] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1217 00:43:38.207471  306295 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-414413" [aa792fc5-63c2-4287-802e-c99c70a9ab2d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1217 00:43:38.207487  306295 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-414413" [e9a02305-5b73-4867-8605-48c8202cf5dd] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1217 00:43:38.207499  306295 system_pods.go:61] "kube-proxy-prlkw" [9a4571d0-7682-4838-aeb3-ccb4480157b8] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1217 00:43:38.207513  306295 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-414413" [a71da427-5b35-43f4-827b-62a96fdfda42] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1217 00:43:38.207524  306295 system_pods.go:61] "storage-provisioner" [0405b749-23a9-4449-90ac-59daf539647b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 00:43:38.207532  306295 system_pods.go:74] duration metric: took 5.045537ms to wait for pod list to return data ...
	I1217 00:43:38.207546  306295 default_sa.go:34] waiting for default service account to be created ...
	I1217 00:43:38.210183  306295 default_sa.go:45] found service account: "default"
	I1217 00:43:38.210203  306295 default_sa.go:55] duration metric: took 2.637116ms for default service account to be created ...
	I1217 00:43:38.210213  306295 system_pods.go:116] waiting for k8s-apps to be running ...
	I1217 00:43:38.213607  306295 system_pods.go:86] 8 kube-system pods found
	I1217 00:43:38.213681  306295 system_pods.go:89] "coredns-66bc5c9577-v76f4" [1370bcd6-f828-4ed0-af58-d2d87c7044bd] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 00:43:38.213702  306295 system_pods.go:89] "etcd-default-k8s-diff-port-414413" [286460a9-8a6c-4939-a2a0-0d5b31620d9a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1217 00:43:38.213714  306295 system_pods.go:89] "kindnet-hxhbf" [a4c2ed1b-ad48-484e-b779-4b93f3a72d0b] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1217 00:43:38.213728  306295 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-414413" [aa792fc5-63c2-4287-802e-c99c70a9ab2d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1217 00:43:38.213743  306295 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-414413" [e9a02305-5b73-4867-8605-48c8202cf5dd] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1217 00:43:38.213757  306295 system_pods.go:89] "kube-proxy-prlkw" [9a4571d0-7682-4838-aeb3-ccb4480157b8] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1217 00:43:38.213770  306295 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-414413" [a71da427-5b35-43f4-827b-62a96fdfda42] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1217 00:43:38.213782  306295 system_pods.go:89] "storage-provisioner" [0405b749-23a9-4449-90ac-59daf539647b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 00:43:38.213793  306295 system_pods.go:126] duration metric: took 3.573126ms to wait for k8s-apps to be running ...
	I1217 00:43:38.213806  306295 system_svc.go:44] waiting for kubelet service to be running ....
	I1217 00:43:38.213863  306295 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 00:43:38.229614  306295 system_svc.go:56] duration metric: took 15.79907ms WaitForService to wait for kubelet
	I1217 00:43:38.229647  306295 kubeadm.go:587] duration metric: took 3.266149884s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 00:43:38.229669  306295 node_conditions.go:102] verifying NodePressure condition ...
	I1217 00:43:38.236003  306295 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1217 00:43:38.236031  306295 node_conditions.go:123] node cpu capacity is 8
	I1217 00:43:38.236048  306295 node_conditions.go:105] duration metric: took 6.372563ms to run NodePressure ...
	I1217 00:43:38.236063  306295 start.go:242] waiting for startup goroutines ...
	I1217 00:43:38.236076  306295 start.go:247] waiting for cluster config update ...
	I1217 00:43:38.236092  306295 start.go:256] writing updated cluster config ...
	I1217 00:43:38.236347  306295 ssh_runner.go:195] Run: rm -f paused
	I1217 00:43:38.240301  306295 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 00:43:38.243629  306295 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-v76f4" in "kube-system" namespace to be "Ready" or be gone ...
	
	
	==> CRI-O <==
	Dec 17 00:43:05 no-preload-864613 crio[569]: time="2025-12-17T00:43:05.097951853Z" level=info msg="Started container" PID=1725 containerID=0c06d21d12bd976afad02c72487a39a9b15d6c50af9d84c2208a4f7f406093b3 description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7x9w8/dashboard-metrics-scraper id=14c9eb41-5b3f-4f0c-bb07-676e6f4377dd name=/runtime.v1.RuntimeService/StartContainer sandboxID=4cf20fadb03c18e861e72a26441d7f22bbc09f6a939f9398dc24b01fde7b1fef
	Dec 17 00:43:06 no-preload-864613 crio[569]: time="2025-12-17T00:43:06.025010093Z" level=info msg="Removing container: 3028f2e3831cc335e16389f8a1488de719f1c76e83ababeed3ab223565c1cd4b" id=c02f5cec-d6b1-4bbd-85d1-009538d9562d name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 17 00:43:06 no-preload-864613 crio[569]: time="2025-12-17T00:43:06.035125676Z" level=info msg="Removed container 3028f2e3831cc335e16389f8a1488de719f1c76e83ababeed3ab223565c1cd4b: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7x9w8/dashboard-metrics-scraper" id=c02f5cec-d6b1-4bbd-85d1-009538d9562d name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 17 00:43:16 no-preload-864613 crio[569]: time="2025-12-17T00:43:16.052026324Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=d110c010-d3e9-4942-8ca2-2d14e4d206a4 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:43:16 no-preload-864613 crio[569]: time="2025-12-17T00:43:16.052975502Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=20ebbd59-74fd-4d63-93f2-28f990483020 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:43:16 no-preload-864613 crio[569]: time="2025-12-17T00:43:16.054097723Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=913b2b93-b430-481e-96bc-0e2a389538f7 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 00:43:16 no-preload-864613 crio[569]: time="2025-12-17T00:43:16.054240702Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 00:43:16 no-preload-864613 crio[569]: time="2025-12-17T00:43:16.059350014Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 00:43:16 no-preload-864613 crio[569]: time="2025-12-17T00:43:16.059527486Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/748402e35e40c3bed5e388c284651ef045b6b1cdbab11d514aa77527819ddf63/merged/etc/passwd: no such file or directory"
	Dec 17 00:43:16 no-preload-864613 crio[569]: time="2025-12-17T00:43:16.059562211Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/748402e35e40c3bed5e388c284651ef045b6b1cdbab11d514aa77527819ddf63/merged/etc/group: no such file or directory"
	Dec 17 00:43:16 no-preload-864613 crio[569]: time="2025-12-17T00:43:16.05978406Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 00:43:16 no-preload-864613 crio[569]: time="2025-12-17T00:43:16.095102048Z" level=info msg="Created container 5587cb88805177c695bf1ec86ad55d11c6c3c94174e7b9d4fd7505596629efb9: kube-system/storage-provisioner/storage-provisioner" id=913b2b93-b430-481e-96bc-0e2a389538f7 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 00:43:16 no-preload-864613 crio[569]: time="2025-12-17T00:43:16.095892065Z" level=info msg="Starting container: 5587cb88805177c695bf1ec86ad55d11c6c3c94174e7b9d4fd7505596629efb9" id=82e9ce79-32bf-43ec-8b3b-4c6c638162b6 name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 00:43:16 no-preload-864613 crio[569]: time="2025-12-17T00:43:16.098068439Z" level=info msg="Started container" PID=1744 containerID=5587cb88805177c695bf1ec86ad55d11c6c3c94174e7b9d4fd7505596629efb9 description=kube-system/storage-provisioner/storage-provisioner id=82e9ce79-32bf-43ec-8b3b-4c6c638162b6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=37813881ab61336036a44898497a25042d9bc5770da5f59bafddaf05f62f319f
	Dec 17 00:43:28 no-preload-864613 crio[569]: time="2025-12-17T00:43:28.926099582Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=6b52d515-4ccc-481e-81e1-90ea31f90d4a name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:43:28 no-preload-864613 crio[569]: time="2025-12-17T00:43:28.927498192Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=34b19578-5cbb-47ec-b17a-c30090ec9982 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:43:28 no-preload-864613 crio[569]: time="2025-12-17T00:43:28.928631944Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7x9w8/dashboard-metrics-scraper" id=af62e3aa-97df-403e-9d63-c36881ad5628 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 00:43:28 no-preload-864613 crio[569]: time="2025-12-17T00:43:28.928902155Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 00:43:28 no-preload-864613 crio[569]: time="2025-12-17T00:43:28.938529294Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 00:43:28 no-preload-864613 crio[569]: time="2025-12-17T00:43:28.939292336Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 00:43:28 no-preload-864613 crio[569]: time="2025-12-17T00:43:28.974707317Z" level=info msg="Created container b3918d1baa01c6eee1e18e913b70130777045f1037ee97fb2baa52d82998123b: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7x9w8/dashboard-metrics-scraper" id=af62e3aa-97df-403e-9d63-c36881ad5628 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 00:43:28 no-preload-864613 crio[569]: time="2025-12-17T00:43:28.97562752Z" level=info msg="Starting container: b3918d1baa01c6eee1e18e913b70130777045f1037ee97fb2baa52d82998123b" id=81b74f8c-7f50-48fa-b894-1b79afdf7bce name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 00:43:28 no-preload-864613 crio[569]: time="2025-12-17T00:43:28.97825746Z" level=info msg="Started container" PID=1776 containerID=b3918d1baa01c6eee1e18e913b70130777045f1037ee97fb2baa52d82998123b description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7x9w8/dashboard-metrics-scraper id=81b74f8c-7f50-48fa-b894-1b79afdf7bce name=/runtime.v1.RuntimeService/StartContainer sandboxID=4cf20fadb03c18e861e72a26441d7f22bbc09f6a939f9398dc24b01fde7b1fef
	Dec 17 00:43:29 no-preload-864613 crio[569]: time="2025-12-17T00:43:29.093260759Z" level=info msg="Removing container: 0c06d21d12bd976afad02c72487a39a9b15d6c50af9d84c2208a4f7f406093b3" id=83ab800f-4de7-47d9-9556-434afaae9f72 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 17 00:43:29 no-preload-864613 crio[569]: time="2025-12-17T00:43:29.105250043Z" level=info msg="Removed container 0c06d21d12bd976afad02c72487a39a9b15d6c50af9d84c2208a4f7f406093b3: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7x9w8/dashboard-metrics-scraper" id=83ab800f-4de7-47d9-9556-434afaae9f72 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	b3918d1baa01c       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           14 seconds ago       Exited              dashboard-metrics-scraper   3                   4cf20fadb03c1       dashboard-metrics-scraper-867fb5f87b-7x9w8   kubernetes-dashboard
	5587cb8880517       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           27 seconds ago       Running             storage-provisioner         1                   37813881ab613       storage-provisioner                          kube-system
	e446ab4cc7b9a       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   51 seconds ago       Running             kubernetes-dashboard        0                   1bef9be22dd39       kubernetes-dashboard-b84665fb8-nrnvc         kubernetes-dashboard
	fcf4367a5b6e0       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           58 seconds ago       Running             busybox                     1                   5f2a786b41554       busybox                                      default
	992ca65d8279b       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                           58 seconds ago       Running             coredns                     0                   a14cb4520bafa       coredns-7d764666f9-6ql6r                     kube-system
	c168143de300c       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           58 seconds ago       Running             kindnet-cni                 0                   e78ba32b0a233       kindnet-bpf4x                                kube-system
	40909a37f96e0       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810                                           58 seconds ago       Running             kube-proxy                  0                   c804a6fee49c9       kube-proxy-2kddk                             kube-system
	b2af3f621d169       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           58 seconds ago       Exited              storage-provisioner         0                   37813881ab613       storage-provisioner                          kube-system
	4b34ed74185a7       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                           About a minute ago   Running             etcd                        0                   2a60c5c2b3c41       etcd-no-preload-864613                       kube-system
	a590d671bfa52       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc                                           About a minute ago   Running             kube-controller-manager     0                   6a32a1d18eb1f       kube-controller-manager-no-preload-864613    kube-system
	a12cf220a059b       aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b                                           About a minute ago   Running             kube-apiserver              0                   8c48c0ae1236d       kube-apiserver-no-preload-864613             kube-system
	d592a6ba05b7b       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46                                           About a minute ago   Running             kube-scheduler              0                   07890319ba75d       kube-scheduler-no-preload-864613             kube-system
	
	
	==> coredns [992ca65d8279bc176a68afbe577e49037ece762e1bbf7e625c4270f35d29840c] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:36649 - 55421 "HINFO IN 6089023238814399908.9138146662419910988. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.486684094s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	
	
	==> describe nodes <==
	Name:               no-preload-864613
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-864613
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c7bb9b74fe8fa422b352c813eb039f077f405cb1
	                    minikube.k8s.io/name=no-preload-864613
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_17T00_41_45_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Dec 2025 00:41:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-864613
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Dec 2025 00:43:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Dec 2025 00:43:14 +0000   Wed, 17 Dec 2025 00:41:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Dec 2025 00:43:14 +0000   Wed, 17 Dec 2025 00:41:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Dec 2025 00:43:14 +0000   Wed, 17 Dec 2025 00:41:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Dec 2025 00:43:14 +0000   Wed, 17 Dec 2025 00:42:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    no-preload-864613
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 f435001bb2f82675fe905cfc693dda54
	  System UUID:                213ec30f-ec82-463e-b257-cb730a6beffc
	  Boot ID:                    0e9cedc6-c46e-4354-b3d2-9272a8b33ae5
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 coredns-7d764666f9-6ql6r                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     113s
	  kube-system                 etcd-no-preload-864613                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         119s
	  kube-system                 kindnet-bpf4x                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      113s
	  kube-system                 kube-apiserver-no-preload-864613              250m (3%)     0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 kube-controller-manager-no-preload-864613     200m (2%)     0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 kube-proxy-2kddk                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-scheduler-no-preload-864613              100m (1%)     0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-7x9w8    0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-nrnvc          0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  114s  node-controller  Node no-preload-864613 event: Registered Node no-preload-864613 in Controller
	  Normal  RegisteredNode  56s   node-controller  Node no-preload-864613 event: Registered Node no-preload-864613 in Controller
	
	
	==> dmesg <==
	[  +0.089382] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024236] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.864694] kauditd_printk_skb: 47 callbacks suppressed
	[Dec17 00:07] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +1.006904] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +1.023891] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +1.023891] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +1.023853] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +1.023879] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +2.048755] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +4.030595] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +8.447143] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[ +16.382404] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000015] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[Dec17 00:08] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	
	
	==> etcd [4b34ed74185a723d1987fd893c6b89aa61e85dd77a4391ea83bf44f5d07a0931] <==
	{"level":"warn","ts":"2025-12-17T00:42:43.342110Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39422","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:42:43.348455Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:42:43.355265Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39458","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:42:43.361835Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39476","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:42:43.370175Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39502","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:42:43.376401Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39526","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:42:43.383383Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39550","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:42:43.389872Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:42:43.396778Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39592","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:42:43.403419Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:42:43.409689Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39638","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:42:43.415915Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39670","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:42:43.423355Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39688","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:42:43.429682Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39690","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:42:43.448345Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39716","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:42:43.455164Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39734","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:42:43.461543Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39750","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:42:43.467552Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39764","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:42:43.512877Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39778","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-17T00:42:44.240752Z","caller":"traceutil/trace.go:172","msg":"trace[1621338994] transaction","detail":"{read_only:false; number_of_response:0; response_revision:455; }","duration":"134.659004ms","start":"2025-12-17T00:42:44.106073Z","end":"2025-12-17T00:42:44.240732Z","steps":["trace[1621338994] 'process raft request'  (duration: 134.588334ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T00:42:44.324677Z","caller":"traceutil/trace.go:172","msg":"trace[111072781] transaction","detail":"{read_only:false; response_revision:456; number_of_response:1; }","duration":"196.637518ms","start":"2025-12-17T00:42:44.127493Z","end":"2025-12-17T00:42:44.324131Z","steps":["trace[111072781] 'process raft request'  (duration: 196.041653ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T00:42:44.458577Z","caller":"traceutil/trace.go:172","msg":"trace[1487177061] linearizableReadLoop","detail":"{readStateIndex:482; appliedIndex:482; }","duration":"114.737118ms","start":"2025-12-17T00:42:44.343814Z","end":"2025-12-17T00:42:44.458551Z","steps":["trace[1487177061] 'read index received'  (duration: 114.728434ms)","trace[1487177061] 'applied index is now lower than readState.Index'  (duration: 7.552µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-17T00:42:44.459113Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"115.254181ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/roles/kube-system/system:persistent-volume-provisioner\" limit:1 ","response":"range_response_count:1 size:1137"}
	{"level":"info","ts":"2025-12-17T00:42:44.459198Z","caller":"traceutil/trace.go:172","msg":"trace[1363464694] range","detail":"{range_begin:/registry/roles/kube-system/system:persistent-volume-provisioner; range_end:; response_count:1; response_revision:456; }","duration":"115.374481ms","start":"2025-12-17T00:42:44.343811Z","end":"2025-12-17T00:42:44.459185Z","steps":["trace[1363464694] 'agreement among raft nodes before linearized reading'  (duration: 114.886803ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T00:42:44.459196Z","caller":"traceutil/trace.go:172","msg":"trace[1058360965] transaction","detail":"{read_only:false; number_of_response:0; response_revision:456; }","duration":"128.526127ms","start":"2025-12-17T00:42:44.330649Z","end":"2025-12-17T00:42:44.459175Z","steps":["trace[1058360965] 'process raft request'  (duration: 127.959824ms)"],"step_count":1}
	
	
	==> kernel <==
	 00:43:43 up  1:26,  0 user,  load average: 3.70, 2.96, 2.01
	Linux no-preload-864613 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [c168143de300c008ec57bfb6f217961739426196e44b7a3fe545f9f941260c0a] <==
	I1217 00:42:45.580744       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1217 00:42:45.581033       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1217 00:42:45.581193       1 main.go:148] setting mtu 1500 for CNI 
	I1217 00:42:45.581206       1 main.go:178] kindnetd IP family: "ipv4"
	I1217 00:42:45.581226       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-17T00:42:45Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1217 00:42:45.783423       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1217 00:42:45.783528       1 controller.go:381] "Waiting for informer caches to sync"
	I1217 00:42:45.783827       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1217 00:42:45.784182       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1217 00:42:46.180505       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1217 00:42:46.180542       1 metrics.go:72] Registering metrics
	I1217 00:42:46.181031       1 controller.go:711] "Syncing nftables rules"
	I1217 00:42:55.784238       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1217 00:42:55.784315       1 main.go:301] handling current node
	I1217 00:43:05.784229       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1217 00:43:05.784261       1 main.go:301] handling current node
	I1217 00:43:15.784362       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1217 00:43:15.784407       1 main.go:301] handling current node
	I1217 00:43:25.784127       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1217 00:43:25.784163       1 main.go:301] handling current node
	I1217 00:43:35.784226       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1217 00:43:35.784273       1 main.go:301] handling current node
	
	
	==> kube-apiserver [a12cf220a059b218df62a14f9045f72149c1009f3507c8c36e206fdf43dc9d57] <==
	I1217 00:42:43.973554       1 aggregator.go:187] initial CRD sync complete...
	I1217 00:42:43.973563       1 autoregister_controller.go:144] Starting autoregister controller
	I1217 00:42:43.973568       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1217 00:42:43.973574       1 cache.go:39] Caches are synced for autoregister controller
	I1217 00:42:43.973754       1 shared_informer.go:377] "Caches are synced"
	I1217 00:42:43.973814       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1217 00:42:43.973840       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1217 00:42:43.978133       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1217 00:42:43.989033       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1217 00:42:43.991136       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1217 00:42:43.999829       1 shared_informer.go:377] "Caches are synced"
	I1217 00:42:43.999858       1 policy_source.go:248] refreshing policies
	I1217 00:42:44.007785       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1217 00:42:44.462119       1 controller.go:667] quota admission added evaluator for: namespaces
	I1217 00:42:44.495773       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1217 00:42:44.522147       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1217 00:42:44.531859       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1217 00:42:44.539608       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1217 00:42:44.575251       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.103.36.252"}
	I1217 00:42:44.591178       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.100.215.84"}
	I1217 00:42:44.877153       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1217 00:42:47.567581       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1217 00:42:47.620217       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1217 00:42:47.766326       1 controller.go:667] quota admission added evaluator for: endpoints
	I1217 00:42:47.766327       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [a590d671bfa52ffb77f09298e606dd5a6cef506d25bf7c749bd516cf65fabaab] <==
	I1217 00:42:47.123216       1 shared_informer.go:377] "Caches are synced"
	I1217 00:42:47.123255       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1217 00:42:47.123270       1 shared_informer.go:377] "Caches are synced"
	I1217 00:42:47.123331       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="no-preload-864613"
	I1217 00:42:47.123344       1 shared_informer.go:377] "Caches are synced"
	I1217 00:42:47.123347       1 shared_informer.go:377] "Caches are synced"
	I1217 00:42:47.123370       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1217 00:42:47.123411       1 shared_informer.go:377] "Caches are synced"
	I1217 00:42:47.123109       1 shared_informer.go:377] "Caches are synced"
	I1217 00:42:47.123485       1 shared_informer.go:377] "Caches are synced"
	I1217 00:42:47.123540       1 range_allocator.go:177] "Sending events to api server"
	I1217 00:42:47.123562       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1217 00:42:47.123567       1 shared_informer.go:370] "Waiting for caches to sync"
	I1217 00:42:47.123572       1 shared_informer.go:377] "Caches are synced"
	I1217 00:42:47.123587       1 shared_informer.go:377] "Caches are synced"
	I1217 00:42:47.123769       1 shared_informer.go:377] "Caches are synced"
	I1217 00:42:47.123572       1 shared_informer.go:377] "Caches are synced"
	I1217 00:42:47.124149       1 shared_informer.go:377] "Caches are synced"
	I1217 00:42:47.124479       1 shared_informer.go:377] "Caches are synced"
	I1217 00:42:47.125969       1 shared_informer.go:370] "Waiting for caches to sync"
	I1217 00:42:47.129596       1 shared_informer.go:377] "Caches are synced"
	I1217 00:42:47.223810       1 shared_informer.go:377] "Caches are synced"
	I1217 00:42:47.223832       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1217 00:42:47.223837       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1217 00:42:47.226881       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [40909a37f96e05409eb1c53f56f9585bf17482d70eae48d671deb9c28e8a104c] <==
	I1217 00:42:45.352572       1 server_linux.go:53] "Using iptables proxy"
	I1217 00:42:45.408208       1 shared_informer.go:370] "Waiting for caches to sync"
	I1217 00:42:45.508311       1 shared_informer.go:377] "Caches are synced"
	I1217 00:42:45.508353       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1217 00:42:45.508458       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1217 00:42:45.528077       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1217 00:42:45.528154       1 server_linux.go:136] "Using iptables Proxier"
	I1217 00:42:45.534808       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1217 00:42:45.535288       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1217 00:42:45.535551       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 00:42:45.536985       1 config.go:200] "Starting service config controller"
	I1217 00:42:45.537027       1 config.go:106] "Starting endpoint slice config controller"
	I1217 00:42:45.537039       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1217 00:42:45.537050       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1217 00:42:45.537059       1 config.go:403] "Starting serviceCIDR config controller"
	I1217 00:42:45.537066       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1217 00:42:45.538735       1 config.go:309] "Starting node config controller"
	I1217 00:42:45.538762       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1217 00:42:45.538769       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1217 00:42:45.637766       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1217 00:42:45.637800       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1217 00:42:45.637643       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [d592a6ba05b7b5e2d53ffd9b29510a47348394c0b8faf29e99d49dce869dbeff] <==
	I1217 00:42:42.812414       1 serving.go:386] Generated self-signed cert in-memory
	W1217 00:42:43.899354       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1217 00:42:43.899486       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1217 00:42:43.899529       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1217 00:42:43.899578       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1217 00:42:43.923329       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-beta.0"
	I1217 00:42:43.923350       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 00:42:43.924946       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1217 00:42:43.924969       1 shared_informer.go:370] "Waiting for caches to sync"
	I1217 00:42:43.925099       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1217 00:42:43.925130       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1217 00:42:44.025129       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 17 00:43:02 no-preload-864613 kubelet[709]: E1217 00:43:02.011964     709 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-no-preload-864613" containerName="kube-scheduler"
	Dec 17 00:43:05 no-preload-864613 kubelet[709]: E1217 00:43:05.054442     709 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7x9w8" containerName="dashboard-metrics-scraper"
	Dec 17 00:43:05 no-preload-864613 kubelet[709]: I1217 00:43:05.054481     709 scope.go:122] "RemoveContainer" containerID="3028f2e3831cc335e16389f8a1488de719f1c76e83ababeed3ab223565c1cd4b"
	Dec 17 00:43:06 no-preload-864613 kubelet[709]: I1217 00:43:06.023791     709 scope.go:122] "RemoveContainer" containerID="3028f2e3831cc335e16389f8a1488de719f1c76e83ababeed3ab223565c1cd4b"
	Dec 17 00:43:06 no-preload-864613 kubelet[709]: E1217 00:43:06.024091     709 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7x9w8" containerName="dashboard-metrics-scraper"
	Dec 17 00:43:06 no-preload-864613 kubelet[709]: I1217 00:43:06.024122     709 scope.go:122] "RemoveContainer" containerID="0c06d21d12bd976afad02c72487a39a9b15d6c50af9d84c2208a4f7f406093b3"
	Dec 17 00:43:06 no-preload-864613 kubelet[709]: E1217 00:43:06.024312     709 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-7x9w8_kubernetes-dashboard(f5201a5f-a7c0-43be-8bf9-89074d8c4c07)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7x9w8" podUID="f5201a5f-a7c0-43be-8bf9-89074d8c4c07"
	Dec 17 00:43:15 no-preload-864613 kubelet[709]: E1217 00:43:15.054286     709 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7x9w8" containerName="dashboard-metrics-scraper"
	Dec 17 00:43:15 no-preload-864613 kubelet[709]: I1217 00:43:15.054344     709 scope.go:122] "RemoveContainer" containerID="0c06d21d12bd976afad02c72487a39a9b15d6c50af9d84c2208a4f7f406093b3"
	Dec 17 00:43:15 no-preload-864613 kubelet[709]: E1217 00:43:15.054815     709 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-7x9w8_kubernetes-dashboard(f5201a5f-a7c0-43be-8bf9-89074d8c4c07)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7x9w8" podUID="f5201a5f-a7c0-43be-8bf9-89074d8c4c07"
	Dec 17 00:43:16 no-preload-864613 kubelet[709]: I1217 00:43:16.051559     709 scope.go:122] "RemoveContainer" containerID="b2af3f621d169db6db7a50be514e4c022a2caa38e1084d576131e2475f388d5d"
	Dec 17 00:43:24 no-preload-864613 kubelet[709]: E1217 00:43:24.297704     709 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-6ql6r" containerName="coredns"
	Dec 17 00:43:28 no-preload-864613 kubelet[709]: E1217 00:43:28.925251     709 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7x9w8" containerName="dashboard-metrics-scraper"
	Dec 17 00:43:28 no-preload-864613 kubelet[709]: I1217 00:43:28.925312     709 scope.go:122] "RemoveContainer" containerID="0c06d21d12bd976afad02c72487a39a9b15d6c50af9d84c2208a4f7f406093b3"
	Dec 17 00:43:29 no-preload-864613 kubelet[709]: I1217 00:43:29.090837     709 scope.go:122] "RemoveContainer" containerID="0c06d21d12bd976afad02c72487a39a9b15d6c50af9d84c2208a4f7f406093b3"
	Dec 17 00:43:29 no-preload-864613 kubelet[709]: E1217 00:43:29.091340     709 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7x9w8" containerName="dashboard-metrics-scraper"
	Dec 17 00:43:29 no-preload-864613 kubelet[709]: I1217 00:43:29.091377     709 scope.go:122] "RemoveContainer" containerID="b3918d1baa01c6eee1e18e913b70130777045f1037ee97fb2baa52d82998123b"
	Dec 17 00:43:29 no-preload-864613 kubelet[709]: E1217 00:43:29.091557     709 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-7x9w8_kubernetes-dashboard(f5201a5f-a7c0-43be-8bf9-89074d8c4c07)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7x9w8" podUID="f5201a5f-a7c0-43be-8bf9-89074d8c4c07"
	Dec 17 00:43:35 no-preload-864613 kubelet[709]: E1217 00:43:35.054631     709 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7x9w8" containerName="dashboard-metrics-scraper"
	Dec 17 00:43:35 no-preload-864613 kubelet[709]: I1217 00:43:35.054671     709 scope.go:122] "RemoveContainer" containerID="b3918d1baa01c6eee1e18e913b70130777045f1037ee97fb2baa52d82998123b"
	Dec 17 00:43:35 no-preload-864613 kubelet[709]: E1217 00:43:35.055503     709 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-7x9w8_kubernetes-dashboard(f5201a5f-a7c0-43be-8bf9-89074d8c4c07)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7x9w8" podUID="f5201a5f-a7c0-43be-8bf9-89074d8c4c07"
	Dec 17 00:43:38 no-preload-864613 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 17 00:43:38 no-preload-864613 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 17 00:43:38 no-preload-864613 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:43:38 no-preload-864613 systemd[1]: kubelet.service: Consumed 1.808s CPU time.
	
	
	==> kubernetes-dashboard [e446ab4cc7b9aeb434956ba232e1f5873d98c50b20e63779da4e13870a2d7e30] <==
	2025/12/17 00:42:51 Starting overwatch
	2025/12/17 00:42:51 Using namespace: kubernetes-dashboard
	2025/12/17 00:42:51 Using in-cluster config to connect to apiserver
	2025/12/17 00:42:51 Using secret token for csrf signing
	2025/12/17 00:42:51 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/17 00:42:51 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/17 00:42:51 Successful initial request to the apiserver, version: v1.35.0-beta.0
	2025/12/17 00:42:51 Generating JWE encryption key
	2025/12/17 00:42:51 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/17 00:42:51 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/17 00:42:51 Initializing JWE encryption key from synchronized object
	2025/12/17 00:42:51 Creating in-cluster Sidecar client
	2025/12/17 00:42:51 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/17 00:42:51 Serving insecurely on HTTP port: 9090
	2025/12/17 00:43:21 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [5587cb88805177c695bf1ec86ad55d11c6c3c94174e7b9d4fd7505596629efb9] <==
	I1217 00:43:16.122920       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1217 00:43:16.122970       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1217 00:43:16.125341       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:43:19.580134       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:43:23.840847       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:43:27.440096       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:43:30.495817       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:43:33.517754       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:43:33.539177       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1217 00:43:33.539414       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1217 00:43:33.539549       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"77491952-ee3b-4988-94b4-88e7432dd743", APIVersion:"v1", ResourceVersion:"650", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-864613_36396c6c-6d1c-457a-832c-40fa14511c43 became leader
	I1217 00:43:33.539651       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-864613_36396c6c-6d1c-457a-832c-40fa14511c43!
	W1217 00:43:33.541951       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:43:33.547103       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1217 00:43:33.640666       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-864613_36396c6c-6d1c-457a-832c-40fa14511c43!
	W1217 00:43:35.550803       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:43:35.555972       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:43:37.560482       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:43:37.569293       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:43:39.572854       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:43:39.577434       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:43:41.580694       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:43:41.586126       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:43:43.590344       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:43:43.594456       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [b2af3f621d169db6db7a50be514e4c022a2caa38e1084d576131e2475f388d5d] <==
	I1217 00:42:45.318424       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1217 00:43:15.322214       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-864613 -n no-preload-864613
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-864613 -n no-preload-864613: exit status 2 (433.172289ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context no-preload-864613 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/Pause (6.64s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (5.43s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-153232 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p embed-certs-153232 --alsologtostderr -v=1: exit status 80 (1.784072697s)

                                                
                                                
-- stdout --
	* Pausing node embed-certs-153232 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 00:44:12.038889  317093 out.go:360] Setting OutFile to fd 1 ...
	I1217 00:44:12.039172  317093 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:44:12.039182  317093 out.go:374] Setting ErrFile to fd 2...
	I1217 00:44:12.039187  317093 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:44:12.039400  317093 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-12816/.minikube/bin
	I1217 00:44:12.039629  317093 out.go:368] Setting JSON to false
	I1217 00:44:12.039647  317093 mustload.go:66] Loading cluster: embed-certs-153232
	I1217 00:44:12.039977  317093 config.go:182] Loaded profile config "embed-certs-153232": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:44:12.040414  317093 cli_runner.go:164] Run: docker container inspect embed-certs-153232 --format={{.State.Status}}
	I1217 00:44:12.061279  317093 host.go:66] Checking if "embed-certs-153232" exists ...
	I1217 00:44:12.061598  317093 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:44:12.130140  317093 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:77 OomKillDisable:false NGoroutines:85 SystemTime:2025-12-17 00:44:12.118540746 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 00:44:12.130979  317093 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1765846775-22141/minikube-v1.37.0-1765846775-22141-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1765846775-22141-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:embed-certs-153232 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1217 00:44:12.132544  317093 out.go:179] * Pausing node embed-certs-153232 ... 
	I1217 00:44:12.133564  317093 host.go:66] Checking if "embed-certs-153232" exists ...
	I1217 00:44:12.133846  317093 ssh_runner.go:195] Run: systemctl --version
	I1217 00:44:12.133886  317093 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-153232
	I1217 00:44:12.154679  317093 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/embed-certs-153232/id_rsa Username:docker}
	I1217 00:44:12.255686  317093 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 00:44:12.274271  317093 pause.go:52] kubelet running: true
	I1217 00:44:12.274366  317093 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1217 00:44:12.512093  317093 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1217 00:44:12.512186  317093 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1217 00:44:12.586803  317093 cri.go:89] found id: "4aa28ef7b86e0ac2c8860e0731143889f5585d08d1c8e3092e5fdbae502d7645"
	I1217 00:44:12.586830  317093 cri.go:89] found id: "932d916c8f226125fbf4338249dcdb35a5f6d7adf40a1fb61934237d9cba3980"
	I1217 00:44:12.586837  317093 cri.go:89] found id: "9e12fba8024abfa61f00f5fe053cd5d50fccf8f0b0cd949bcff836ef6212ea59"
	I1217 00:44:12.586845  317093 cri.go:89] found id: "e8ac4e7470f9424e1e7541237e9c9cdc16aa75232ea66c1cdc71939466c64b0d"
	I1217 00:44:12.586850  317093 cri.go:89] found id: "6301e99f54ccbfcaa7a5dde58d324c165f0fe60d9d03ed0b9fa97c55700ac344"
	I1217 00:44:12.586854  317093 cri.go:89] found id: "dadde2213b8a894873343cf42602c1bedb001a3311bd9672a69d0fa4a07d9786"
	I1217 00:44:12.586857  317093 cri.go:89] found id: "117e1e782a79833091ca7f1a9da4be915158517d3d54c5674f3b4e0875f18cce"
	I1217 00:44:12.586860  317093 cri.go:89] found id: "f3a000d40d6d7ebc54a27ecd08dc5aa3b530c6e66b7327ec3ec09941fca5d2ce"
	I1217 00:44:12.586862  317093 cri.go:89] found id: "a770bc08061f975f567cb7fb7cec6883ec6d5215d19863d7ddb2cc0049571d8b"
	I1217 00:44:12.586874  317093 cri.go:89] found id: "3d2c3aa6013510ed343b70dda91e1024e94192c440d8cb7aa743b80510c1917f"
	I1217 00:44:12.586877  317093 cri.go:89] found id: "d4b900c582c6abc6c4d8c623e5365ca20e2f76c0980168c5652e9f834c43de48"
	I1217 00:44:12.586881  317093 cri.go:89] found id: ""
	I1217 00:44:12.586926  317093 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 00:44:12.599721  317093 retry.go:31] will retry after 131.476824ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T00:44:12Z" level=error msg="open /run/runc: no such file or directory"
	I1217 00:44:12.732112  317093 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 00:44:12.745343  317093 pause.go:52] kubelet running: false
	I1217 00:44:12.745408  317093 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1217 00:44:12.900576  317093 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1217 00:44:12.900684  317093 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1217 00:44:12.968486  317093 cri.go:89] found id: "4aa28ef7b86e0ac2c8860e0731143889f5585d08d1c8e3092e5fdbae502d7645"
	I1217 00:44:12.968505  317093 cri.go:89] found id: "932d916c8f226125fbf4338249dcdb35a5f6d7adf40a1fb61934237d9cba3980"
	I1217 00:44:12.968510  317093 cri.go:89] found id: "9e12fba8024abfa61f00f5fe053cd5d50fccf8f0b0cd949bcff836ef6212ea59"
	I1217 00:44:12.968513  317093 cri.go:89] found id: "e8ac4e7470f9424e1e7541237e9c9cdc16aa75232ea66c1cdc71939466c64b0d"
	I1217 00:44:12.968516  317093 cri.go:89] found id: "6301e99f54ccbfcaa7a5dde58d324c165f0fe60d9d03ed0b9fa97c55700ac344"
	I1217 00:44:12.968519  317093 cri.go:89] found id: "dadde2213b8a894873343cf42602c1bedb001a3311bd9672a69d0fa4a07d9786"
	I1217 00:44:12.968522  317093 cri.go:89] found id: "117e1e782a79833091ca7f1a9da4be915158517d3d54c5674f3b4e0875f18cce"
	I1217 00:44:12.968524  317093 cri.go:89] found id: "f3a000d40d6d7ebc54a27ecd08dc5aa3b530c6e66b7327ec3ec09941fca5d2ce"
	I1217 00:44:12.968527  317093 cri.go:89] found id: "a770bc08061f975f567cb7fb7cec6883ec6d5215d19863d7ddb2cc0049571d8b"
	I1217 00:44:12.968533  317093 cri.go:89] found id: "3d2c3aa6013510ed343b70dda91e1024e94192c440d8cb7aa743b80510c1917f"
	I1217 00:44:12.968535  317093 cri.go:89] found id: "d4b900c582c6abc6c4d8c623e5365ca20e2f76c0980168c5652e9f834c43de48"
	I1217 00:44:12.968538  317093 cri.go:89] found id: ""
	I1217 00:44:12.968571  317093 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 00:44:12.980289  317093 retry.go:31] will retry after 522.079816ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T00:44:12Z" level=error msg="open /run/runc: no such file or directory"
	I1217 00:44:13.502970  317093 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 00:44:13.516522  317093 pause.go:52] kubelet running: false
	I1217 00:44:13.516569  317093 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1217 00:44:13.677747  317093 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1217 00:44:13.677831  317093 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1217 00:44:13.745883  317093 cri.go:89] found id: "4aa28ef7b86e0ac2c8860e0731143889f5585d08d1c8e3092e5fdbae502d7645"
	I1217 00:44:13.745908  317093 cri.go:89] found id: "932d916c8f226125fbf4338249dcdb35a5f6d7adf40a1fb61934237d9cba3980"
	I1217 00:44:13.745916  317093 cri.go:89] found id: "9e12fba8024abfa61f00f5fe053cd5d50fccf8f0b0cd949bcff836ef6212ea59"
	I1217 00:44:13.745924  317093 cri.go:89] found id: "e8ac4e7470f9424e1e7541237e9c9cdc16aa75232ea66c1cdc71939466c64b0d"
	I1217 00:44:13.745929  317093 cri.go:89] found id: "6301e99f54ccbfcaa7a5dde58d324c165f0fe60d9d03ed0b9fa97c55700ac344"
	I1217 00:44:13.745934  317093 cri.go:89] found id: "dadde2213b8a894873343cf42602c1bedb001a3311bd9672a69d0fa4a07d9786"
	I1217 00:44:13.745939  317093 cri.go:89] found id: "117e1e782a79833091ca7f1a9da4be915158517d3d54c5674f3b4e0875f18cce"
	I1217 00:44:13.745944  317093 cri.go:89] found id: "f3a000d40d6d7ebc54a27ecd08dc5aa3b530c6e66b7327ec3ec09941fca5d2ce"
	I1217 00:44:13.745949  317093 cri.go:89] found id: "a770bc08061f975f567cb7fb7cec6883ec6d5215d19863d7ddb2cc0049571d8b"
	I1217 00:44:13.745958  317093 cri.go:89] found id: "3d2c3aa6013510ed343b70dda91e1024e94192c440d8cb7aa743b80510c1917f"
	I1217 00:44:13.745964  317093 cri.go:89] found id: "d4b900c582c6abc6c4d8c623e5365ca20e2f76c0980168c5652e9f834c43de48"
	I1217 00:44:13.745970  317093 cri.go:89] found id: ""
	I1217 00:44:13.746045  317093 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 00:44:13.759687  317093 out.go:203] 
	W1217 00:44:13.760774  317093 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T00:44:13Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T00:44:13Z" level=error msg="open /run/runc: no such file or directory"
	
	W1217 00:44:13.760798  317093 out.go:285] * 
	* 
	W1217 00:44:13.764795  317093 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 00:44:13.765903  317093 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p embed-certs-153232 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect embed-certs-153232
helpers_test.go:244: (dbg) docker inspect embed-certs-153232:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "0643874d17495c0c32f8432b12d57b11dd7085dfaf7906608f3a8753637c5a15",
	        "Created": "2025-12-17T00:42:07.386583477Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 301693,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T00:43:13.27345219Z",
	            "FinishedAt": "2025-12-17T00:43:12.344975251Z"
	        },
	        "Image": "sha256:2e44aac5cae5bb6b68b129ed5c85e80a5c1aac07706537d46ba12326f0e5c3cf",
	        "ResolvConfPath": "/var/lib/docker/containers/0643874d17495c0c32f8432b12d57b11dd7085dfaf7906608f3a8753637c5a15/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0643874d17495c0c32f8432b12d57b11dd7085dfaf7906608f3a8753637c5a15/hostname",
	        "HostsPath": "/var/lib/docker/containers/0643874d17495c0c32f8432b12d57b11dd7085dfaf7906608f3a8753637c5a15/hosts",
	        "LogPath": "/var/lib/docker/containers/0643874d17495c0c32f8432b12d57b11dd7085dfaf7906608f3a8753637c5a15/0643874d17495c0c32f8432b12d57b11dd7085dfaf7906608f3a8753637c5a15-json.log",
	        "Name": "/embed-certs-153232",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-153232:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-153232",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0643874d17495c0c32f8432b12d57b11dd7085dfaf7906608f3a8753637c5a15",
	                "LowerDir": "/var/lib/docker/overlay2/75e64cb888fdc80983d39325faeb17b16c0afd2693d7425dc490c93491959bb6-init/diff:/var/lib/docker/overlay2/594b812fd6d8db89dab322ea9e00d43dd555e9709fb5e6953e3873cce717392c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/75e64cb888fdc80983d39325faeb17b16c0afd2693d7425dc490c93491959bb6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/75e64cb888fdc80983d39325faeb17b16c0afd2693d7425dc490c93491959bb6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/75e64cb888fdc80983d39325faeb17b16c0afd2693d7425dc490c93491959bb6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-153232",
	                "Source": "/var/lib/docker/volumes/embed-certs-153232/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-153232",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-153232",
	                "name.minikube.sigs.k8s.io": "embed-certs-153232",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "f2158cc6cbcda501008c4446adb8778717225fc54b69ef09a4e4e5b039a35f2e",
	            "SandboxKey": "/var/run/docker/netns/f2158cc6cbcd",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33098"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33099"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33102"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33100"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33101"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-153232": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a0b8f164bc66d742f404a41cb692119204b3085d963265276bc535b43e9a9723",
	                    "EndpointID": "53af4ffc444007d7d41c8bf0e4ff9ef93828286575e91871e7c05279a13be2ba",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "22:ff:af:62:fd:29",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-153232",
	                        "0643874d1749"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-153232 -n embed-certs-153232
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-153232 -n embed-certs-153232: exit status 2 (315.952513ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-153232 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-153232 logs -n 25: (1.042185604s)
helpers_test.go:261: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬───────
──────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼───────
──────────────┤
	│ start   │ -p newest-cni-653717 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-653717            │ jenkins │ v1.37.0 │ 17 Dec 25 00:42 UTC │ 17 Dec 25 00:43 UTC │
	│ addons  │ enable metrics-server -p embed-certs-153232 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ embed-certs-153232           │ jenkins │ v1.37.0 │ 17 Dec 25 00:42 UTC │                     │
	│ stop    │ -p embed-certs-153232 --alsologtostderr -v=3                                                                                                                                                                                                         │ embed-certs-153232           │ jenkins │ v1.37.0 │ 17 Dec 25 00:42 UTC │ 17 Dec 25 00:43 UTC │
	│ addons  │ enable metrics-server -p newest-cni-653717 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ newest-cni-653717            │ jenkins │ v1.37.0 │ 17 Dec 25 00:43 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-414413 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                   │ default-k8s-diff-port-414413 │ jenkins │ v1.37.0 │ 17 Dec 25 00:43 UTC │                     │
	│ stop    │ -p newest-cni-653717 --alsologtostderr -v=3                                                                                                                                                                                                          │ newest-cni-653717            │ jenkins │ v1.37.0 │ 17 Dec 25 00:43 UTC │ 17 Dec 25 00:43 UTC │
	│ stop    │ -p default-k8s-diff-port-414413 --alsologtostderr -v=3                                                                                                                                                                                               │ default-k8s-diff-port-414413 │ jenkins │ v1.37.0 │ 17 Dec 25 00:43 UTC │ 17 Dec 25 00:43 UTC │
	│ addons  │ enable dashboard -p newest-cni-653717 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ newest-cni-653717            │ jenkins │ v1.37.0 │ 17 Dec 25 00:43 UTC │ 17 Dec 25 00:43 UTC │
	│ start   │ -p newest-cni-653717 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-653717            │ jenkins │ v1.37.0 │ 17 Dec 25 00:43 UTC │ 17 Dec 25 00:43 UTC │
	│ addons  │ enable dashboard -p embed-certs-153232 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ embed-certs-153232           │ jenkins │ v1.37.0 │ 17 Dec 25 00:43 UTC │ 17 Dec 25 00:43 UTC │
	│ start   │ -p embed-certs-153232 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-153232           │ jenkins │ v1.37.0 │ 17 Dec 25 00:43 UTC │ 17 Dec 25 00:44 UTC │
	│ image   │ newest-cni-653717 image list --format=json                                                                                                                                                                                                           │ newest-cni-653717            │ jenkins │ v1.37.0 │ 17 Dec 25 00:43 UTC │ 17 Dec 25 00:43 UTC │
	│ pause   │ -p newest-cni-653717 --alsologtostderr -v=1                                                                                                                                                                                                          │ newest-cni-653717            │ jenkins │ v1.37.0 │ 17 Dec 25 00:43 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-414413 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                              │ default-k8s-diff-port-414413 │ jenkins │ v1.37.0 │ 17 Dec 25 00:43 UTC │ 17 Dec 25 00:43 UTC │
	│ start   │ -p default-k8s-diff-port-414413 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-414413 │ jenkins │ v1.37.0 │ 17 Dec 25 00:43 UTC │                     │
	│ delete  │ -p newest-cni-653717                                                                                                                                                                                                                                 │ newest-cni-653717            │ jenkins │ v1.37.0 │ 17 Dec 25 00:43 UTC │ 17 Dec 25 00:43 UTC │
	│ delete  │ -p newest-cni-653717                                                                                                                                                                                                                                 │ newest-cni-653717            │ jenkins │ v1.37.0 │ 17 Dec 25 00:43 UTC │ 17 Dec 25 00:43 UTC │
	│ start   │ -p auto-802249 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                              │ auto-802249                  │ jenkins │ v1.37.0 │ 17 Dec 25 00:43 UTC │                     │
	│ image   │ no-preload-864613 image list --format=json                                                                                                                                                                                                           │ no-preload-864613            │ jenkins │ v1.37.0 │ 17 Dec 25 00:43 UTC │ 17 Dec 25 00:43 UTC │
	│ pause   │ -p no-preload-864613 --alsologtostderr -v=1                                                                                                                                                                                                          │ no-preload-864613            │ jenkins │ v1.37.0 │ 17 Dec 25 00:43 UTC │                     │
	│ delete  │ -p no-preload-864613                                                                                                                                                                                                                                 │ no-preload-864613            │ jenkins │ v1.37.0 │ 17 Dec 25 00:43 UTC │ 17 Dec 25 00:43 UTC │
	│ delete  │ -p no-preload-864613                                                                                                                                                                                                                                 │ no-preload-864613            │ jenkins │ v1.37.0 │ 17 Dec 25 00:43 UTC │ 17 Dec 25 00:43 UTC │
	│ start   │ -p kindnet-802249 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio                                                                                                             │ kindnet-802249               │ jenkins │ v1.37.0 │ 17 Dec 25 00:43 UTC │                     │
	│ image   │ embed-certs-153232 image list --format=json                                                                                                                                                                                                          │ embed-certs-153232           │ jenkins │ v1.37.0 │ 17 Dec 25 00:44 UTC │ 17 Dec 25 00:44 UTC │
	│ pause   │ -p embed-certs-153232 --alsologtostderr -v=1                                                                                                                                                                                                         │ embed-certs-153232           │ jenkins │ v1.37.0 │ 17 Dec 25 00:44 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴───────
──────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 00:43:48
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 00:43:48.769795  313838 out.go:360] Setting OutFile to fd 1 ...
	I1217 00:43:48.770047  313838 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:43:48.770055  313838 out.go:374] Setting ErrFile to fd 2...
	I1217 00:43:48.770060  313838 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:43:48.770244  313838 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-12816/.minikube/bin
	I1217 00:43:48.770680  313838 out.go:368] Setting JSON to false
	I1217 00:43:48.771902  313838 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":5179,"bootTime":1765927050,"procs":344,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 00:43:48.771962  313838 start.go:143] virtualization: kvm guest
	I1217 00:43:48.774031  313838 out.go:179] * [kindnet-802249] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 00:43:48.775928  313838 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 00:43:48.775950  313838 notify.go:221] Checking for updates...
	I1217 00:43:48.778239  313838 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 00:43:48.779636  313838 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22168-12816/kubeconfig
	I1217 00:43:48.780730  313838 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-12816/.minikube
	I1217 00:43:48.781754  313838 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 00:43:48.783004  313838 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 00:43:48.784614  313838 config.go:182] Loaded profile config "auto-802249": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:43:48.784740  313838 config.go:182] Loaded profile config "default-k8s-diff-port-414413": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:43:48.784875  313838 config.go:182] Loaded profile config "embed-certs-153232": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:43:48.785025  313838 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 00:43:48.811205  313838 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1217 00:43:48.811415  313838 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:43:48.879696  313838 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-17 00:43:48.868563585 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 00:43:48.879821  313838 docker.go:319] overlay module found
	I1217 00:43:48.881467  313838 out.go:179] * Using the docker driver based on user configuration
	I1217 00:43:48.882479  313838 start.go:309] selected driver: docker
	I1217 00:43:48.882497  313838 start.go:927] validating driver "docker" against <nil>
	I1217 00:43:48.882510  313838 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 00:43:48.883253  313838 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:43:48.942567  313838 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-17 00:43:48.932031348 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 00:43:48.942777  313838 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1217 00:43:48.943106  313838 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 00:43:48.944723  313838 out.go:179] * Using Docker driver with root privileges
	I1217 00:43:48.945842  313838 cni.go:84] Creating CNI manager for "kindnet"
	I1217 00:43:48.945864  313838 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1217 00:43:48.945941  313838 start.go:353] cluster config:
	{Name:kindnet-802249 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kindnet-802249 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRunti
me:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentP
ID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:43:48.947336  313838 out.go:179] * Starting "kindnet-802249" primary control-plane node in "kindnet-802249" cluster
	I1217 00:43:48.948365  313838 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 00:43:48.949518  313838 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 00:43:48.950629  313838 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1217 00:43:48.950677  313838 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22168-12816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1217 00:43:48.950691  313838 cache.go:65] Caching tarball of preloaded images
	I1217 00:43:48.950727  313838 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 00:43:48.950805  313838 preload.go:238] Found /home/jenkins/minikube-integration/22168-12816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1217 00:43:48.950818  313838 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1217 00:43:48.950936  313838 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/kindnet-802249/config.json ...
	I1217 00:43:48.950964  313838 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/kindnet-802249/config.json: {Name:mk0bb9291022a4703579df82ea9711e36a66c4f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:43:48.972665  313838 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 00:43:48.972692  313838 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 00:43:48.972710  313838 cache.go:243] Successfully downloaded all kic artifacts
	I1217 00:43:48.972742  313838 start.go:360] acquireMachinesLock for kindnet-802249: {Name:mkbd43bf8515ac51e94479f10c515da678a2d966 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 00:43:48.972870  313838 start.go:364] duration metric: took 107.153µs to acquireMachinesLock for "kindnet-802249"
	I1217 00:43:48.972901  313838 start.go:93] Provisioning new machine with config: &{Name:kindnet-802249 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kindnet-802249 Namespace:default APIServerHAVIP: APIServerName:m
inikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirm
warePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 00:43:48.972974  313838 start.go:125] createHost starting for "" (driver="docker")
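
The start sequence above verifies its local artifacts before provisioning anything: the preloaded image tarball is found in the cache and the kicbase image is already present in the local docker daemon, so both downloads are skipped. A minimal manual version of the same checks (a sketch, using the exact paths and image reference from the log):

	# is the kicbase image already in the local daemon?
	docker image inspect gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 --format '{{.Id}}'
	# is the preload tarball cached on the host?
	ls -lh /home/jenkins/minikube-integration/22168-12816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
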
	W1217 00:43:44.760180  306295 pod_ready.go:104] pod "coredns-66bc5c9577-v76f4" is not "Ready", error: <nil>
	W1217 00:43:47.273533  306295 pod_ready.go:104] pod "coredns-66bc5c9577-v76f4" is not "Ready", error: <nil>
	I1217 00:43:49.784559  307526 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1217 00:43:49.784642  307526 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 00:43:49.784765  307526 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 00:43:49.784840  307526 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1217 00:43:49.784881  307526 kubeadm.go:319] OS: Linux
	I1217 00:43:49.784955  307526 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 00:43:49.785040  307526 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 00:43:49.785108  307526 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 00:43:49.785187  307526 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 00:43:49.785261  307526 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 00:43:49.785314  307526 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 00:43:49.785358  307526 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 00:43:49.785396  307526 kubeadm.go:319] CGROUPS_IO: enabled
	I1217 00:43:49.785478  307526 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 00:43:49.785573  307526 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 00:43:49.785652  307526 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 00:43:49.785710  307526 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 00:43:49.787201  307526 out.go:252]   - Generating certificates and keys ...
	I1217 00:43:49.787276  307526 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 00:43:49.787364  307526 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 00:43:49.787429  307526 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1217 00:43:49.787475  307526 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1217 00:43:49.787528  307526 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1217 00:43:49.787597  307526 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1217 00:43:49.787664  307526 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1217 00:43:49.787784  307526 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [auto-802249 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1217 00:43:49.787884  307526 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1217 00:43:49.788045  307526 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [auto-802249 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1217 00:43:49.788144  307526 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1217 00:43:49.788236  307526 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1217 00:43:49.788301  307526 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1217 00:43:49.788362  307526 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 00:43:49.788412  307526 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 00:43:49.788457  307526 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 00:43:49.788500  307526 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 00:43:49.788552  307526 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 00:43:49.788596  307526 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 00:43:49.788678  307526 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 00:43:49.788763  307526 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 00:43:49.790074  307526 out.go:252]   - Booting up control plane ...
	I1217 00:43:49.790183  307526 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 00:43:49.790288  307526 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 00:43:49.790399  307526 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 00:43:49.790508  307526 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 00:43:49.790639  307526 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 00:43:49.790791  307526 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 00:43:49.790914  307526 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 00:43:49.790971  307526 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 00:43:49.791150  307526 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 00:43:49.791278  307526 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 00:43:49.791370  307526 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 502.199672ms
	I1217 00:43:49.791497  307526 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1217 00:43:49.791613  307526 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I1217 00:43:49.791733  307526 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1217 00:43:49.791846  307526 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1217 00:43:49.791961  307526 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.963819553s
	I1217 00:43:49.792139  307526 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.065065905s
	I1217 00:43:49.792238  307526 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.001333371s
	I1217 00:43:49.792362  307526 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1217 00:43:49.792527  307526 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1217 00:43:49.792584  307526 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1217 00:43:49.792827  307526 kubeadm.go:319] [mark-control-plane] Marking the node auto-802249 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1217 00:43:49.792880  307526 kubeadm.go:319] [bootstrap-token] Using token: hewufi.jsvybo2i82bav9vd
	I1217 00:43:49.794192  307526 out.go:252]   - Configuring RBAC rules ...
	I1217 00:43:49.794372  307526 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1217 00:43:49.794485  307526 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1217 00:43:49.794694  307526 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1217 00:43:49.794899  307526 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1217 00:43:49.795139  307526 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1217 00:43:49.795280  307526 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1217 00:43:49.795467  307526 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1217 00:43:49.795531  307526 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1217 00:43:49.795601  307526 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1217 00:43:49.795610  307526 kubeadm.go:319] 
	I1217 00:43:49.795698  307526 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1217 00:43:49.795708  307526 kubeadm.go:319] 
	I1217 00:43:49.795824  307526 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1217 00:43:49.795834  307526 kubeadm.go:319] 
	I1217 00:43:49.795869  307526 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1217 00:43:49.795971  307526 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1217 00:43:49.796067  307526 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1217 00:43:49.796080  307526 kubeadm.go:319] 
	I1217 00:43:49.796159  307526 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1217 00:43:49.796168  307526 kubeadm.go:319] 
	I1217 00:43:49.796236  307526 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1217 00:43:49.796246  307526 kubeadm.go:319] 
	I1217 00:43:49.796323  307526 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1217 00:43:49.796430  307526 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1217 00:43:49.796545  307526 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1217 00:43:49.796556  307526 kubeadm.go:319] 
	I1217 00:43:49.796684  307526 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1217 00:43:49.796794  307526 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1217 00:43:49.796806  307526 kubeadm.go:319] 
	I1217 00:43:49.796928  307526 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token hewufi.jsvybo2i82bav9vd \
	I1217 00:43:49.797083  307526 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:a7c34974519aee4953e03245da076d7a2eba06e40135880a85806e2dab303fa1 \
	I1217 00:43:49.797128  307526 kubeadm.go:319] 	--control-plane 
	I1217 00:43:49.797137  307526 kubeadm.go:319] 
	I1217 00:43:49.797280  307526 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1217 00:43:49.797299  307526 kubeadm.go:319] 
	I1217 00:43:49.797372  307526 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token hewufi.jsvybo2i82bav9vd \
	I1217 00:43:49.797481  307526 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:a7c34974519aee4953e03245da076d7a2eba06e40135880a85806e2dab303fa1 
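
The join commands printed by kubeadm embed a bootstrap token and the SHA-256 hash of the cluster CA public key. If that hash ever needs to be recomputed on the control-plane node, the standard recipe is the following (a sketch, assuming the CA written by kubeadm sits in the certificateDir shown above, /var/lib/minikube/certs):

	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'
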
	I1217 00:43:49.797510  307526 cni.go:84] Creating CNI manager for ""
	I1217 00:43:49.797519  307526 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 00:43:49.799489  307526 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1217 00:43:49.800531  307526 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1217 00:43:49.805278  307526 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1217 00:43:49.805294  307526 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1217 00:43:49.820140  307526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1217 00:43:50.071276  307526 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1217 00:43:50.071364  307526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:43:50.071389  307526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-802249 minikube.k8s.io/updated_at=2025_12_17T00_43_50_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=c7bb9b74fe8fa422b352c813eb039f077f405cb1 minikube.k8s.io/name=auto-802249 minikube.k8s.io/primary=true
	I1217 00:43:50.083649  307526 ops.go:34] apiserver oom_adj: -16
	I1217 00:43:50.166785  307526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:43:50.667140  307526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:43:51.167625  307526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:43:51.667172  307526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:43:52.167662  307526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:43:52.667540  307526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
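
The repeated "kubectl get sa default" runs above appear to be a poll loop (summarised later in the log as elevateKubeSystemPrivileges): the cluster is not treated as usable until the token controller has created the default ServiceAccount. The same condition can be checked by hand with any kubeconfig pointing at the new cluster (sketch):

	kubectl -n default get serviceaccount default -o jsonpath='{.metadata.name}'
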
	W1217 00:43:48.367908  301437 pod_ready.go:104] pod "coredns-66bc5c9577-vtspd" is not "Ready", error: <nil>
	W1217 00:43:50.368775  301437 pod_ready.go:104] pod "coredns-66bc5c9577-vtspd" is not "Ready", error: <nil>
	W1217 00:43:52.877476  301437 pod_ready.go:104] pod "coredns-66bc5c9577-vtspd" is not "Ready", error: <nil>
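
The interleaved W-lines come from two other profiles that are still waiting on their CoreDNS pods; a pod only counts once its Ready condition is True. A quick manual equivalent for any of those clusters (sketch, using the standard CoreDNS label):

	kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=60s
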
	I1217 00:43:48.974767  313838 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1217 00:43:48.975076  313838 start.go:159] libmachine.API.Create for "kindnet-802249" (driver="docker")
	I1217 00:43:48.975111  313838 client.go:173] LocalClient.Create starting
	I1217 00:43:48.975180  313838 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem
	I1217 00:43:48.975219  313838 main.go:143] libmachine: Decoding PEM data...
	I1217 00:43:48.975242  313838 main.go:143] libmachine: Parsing certificate...
	I1217 00:43:48.975326  313838 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22168-12816/.minikube/certs/cert.pem
	I1217 00:43:48.975352  313838 main.go:143] libmachine: Decoding PEM data...
	I1217 00:43:48.975383  313838 main.go:143] libmachine: Parsing certificate...
	I1217 00:43:48.975744  313838 cli_runner.go:164] Run: docker network inspect kindnet-802249 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1217 00:43:48.994403  313838 cli_runner.go:211] docker network inspect kindnet-802249 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1217 00:43:48.994483  313838 network_create.go:284] running [docker network inspect kindnet-802249] to gather additional debugging logs...
	I1217 00:43:48.994509  313838 cli_runner.go:164] Run: docker network inspect kindnet-802249
	W1217 00:43:49.017898  313838 cli_runner.go:211] docker network inspect kindnet-802249 returned with exit code 1
	I1217 00:43:49.017935  313838 network_create.go:287] error running [docker network inspect kindnet-802249]: docker network inspect kindnet-802249: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kindnet-802249 not found
	I1217 00:43:49.017964  313838 network_create.go:289] output of [docker network inspect kindnet-802249]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kindnet-802249 not found
	
	** /stderr **
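
The exit status 1 above is expected on a fresh profile: the per-profile docker network does not exist yet, which is why minikube falls back to scanning the existing bridges and picking a free subnet in the next lines. Scripted, the same existence check is simply (sketch):

	docker network inspect kindnet-802249 >/dev/null 2>&1 || echo "network not created yet"
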
	I1217 00:43:49.018078  313838 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 00:43:49.043478  313838 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-6ffd1d738f01 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:6e:3d:52:75:47:82} reservation:<nil>}
	I1217 00:43:49.044499  313838 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-280edd437675 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:5e:ae:02:b5:f9:a6} reservation:<nil>}
	I1217 00:43:49.045635  313838 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-9f28d049043c IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:fa:3f:8e:e9:44:56} reservation:<nil>}
	I1217 00:43:49.046382  313838 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-a57026acfc12 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:aa:e6:32:39:49:3b} reservation:<nil>}
	I1217 00:43:49.047004  313838 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-a0b8f164bc66 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:ae:bf:0f:c2:a1:7a} reservation:<nil>}
	I1217 00:43:49.047816  313838 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-2bf3b4bee687 IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:ce:d1:20:00:3e:43} reservation:<nil>}
	I1217 00:43:49.048945  313838 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f9aa00}
	I1217 00:43:49.048968  313838 network_create.go:124] attempt to create docker network kindnet-802249 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1217 00:43:49.049135  313838 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kindnet-802249 kindnet-802249
	I1217 00:43:49.098433  313838 network_create.go:108] docker network kindnet-802249 192.168.103.0/24 created
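
Once created, the network's subnet, gateway and MTU can be read back directly and should match the values chosen above (sketch):

	docker network inspect kindnet-802249 \
	  --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}} {{end}}mtu={{index .Options "com.docker.network.driver.mtu"}}'
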
	I1217 00:43:49.098466  313838 kic.go:121] calculated static IP "192.168.103.2" for the "kindnet-802249" container
	I1217 00:43:49.098566  313838 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1217 00:43:49.116785  313838 cli_runner.go:164] Run: docker volume create kindnet-802249 --label name.minikube.sigs.k8s.io=kindnet-802249 --label created_by.minikube.sigs.k8s.io=true
	I1217 00:43:49.135373  313838 oci.go:103] Successfully created a docker volume kindnet-802249
	I1217 00:43:49.135457  313838 cli_runner.go:164] Run: docker run --rm --name kindnet-802249-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-802249 --entrypoint /usr/bin/test -v kindnet-802249:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib
	I1217 00:43:49.745394  313838 oci.go:107] Successfully prepared a docker volume kindnet-802249
	I1217 00:43:49.745467  313838 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1217 00:43:49.745481  313838 kic.go:194] Starting extracting preloaded images to volume ...
	I1217 00:43:49.745541  313838 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22168-12816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-802249:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir
	I1217 00:43:53.730169  313838 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22168-12816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-802249:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir: (3.98455596s)
	I1217 00:43:53.730206  313838 kic.go:203] duration metric: took 3.984721772s to extract preloaded images to volume ...
	W1217 00:43:53.730319  313838 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1217 00:43:53.730365  313838 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1217 00:43:53.730427  313838 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
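
The preload is unpacked straight into the named volume by a short-lived container running tar, so the node container created next starts with /var already populated with cached images. Whether the volume exists, and where docker mounted it on the host, can be confirmed with (sketch):

	docker volume inspect kindnet-802249 --format '{{.Name}} {{.Mountpoint}}'
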
	W1217 00:43:49.749486  306295 pod_ready.go:104] pod "coredns-66bc5c9577-v76f4" is not "Ready", error: <nil>
	W1217 00:43:51.749796  306295 pod_ready.go:104] pod "coredns-66bc5c9577-v76f4" is not "Ready", error: <nil>
	W1217 00:43:53.752706  306295 pod_ready.go:104] pod "coredns-66bc5c9577-v76f4" is not "Ready", error: <nil>
	I1217 00:43:53.167719  307526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:43:53.667433  307526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:43:54.167168  307526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:43:54.667183  307526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:43:54.749050  307526 kubeadm.go:1114] duration metric: took 4.677757562s to wait for elevateKubeSystemPrivileges
	I1217 00:43:54.749081  307526 kubeadm.go:403] duration metric: took 16.763588732s to StartCluster
	I1217 00:43:54.749106  307526 settings.go:142] acquiring lock: {Name:mk7d7632cd00ceda791845d793d841181ea8188a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:43:54.749184  307526 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22168-12816/kubeconfig
	I1217 00:43:54.751400  307526 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/kubeconfig: {Name:mkd977475b39383babd1c1bcd25b2b3c1ea2d727 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:43:54.751685  307526 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1217 00:43:54.751696  307526 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 00:43:54.751759  307526 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 00:43:54.751867  307526 addons.go:70] Setting storage-provisioner=true in profile "auto-802249"
	I1217 00:43:54.751878  307526 config.go:182] Loaded profile config "auto-802249": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:43:54.751892  307526 addons.go:239] Setting addon storage-provisioner=true in "auto-802249"
	I1217 00:43:54.751924  307526 host.go:66] Checking if "auto-802249" exists ...
	I1217 00:43:54.751907  307526 addons.go:70] Setting default-storageclass=true in profile "auto-802249"
	I1217 00:43:54.751979  307526 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "auto-802249"
	I1217 00:43:54.752435  307526 cli_runner.go:164] Run: docker container inspect auto-802249 --format={{.State.Status}}
	I1217 00:43:54.752612  307526 cli_runner.go:164] Run: docker container inspect auto-802249 --format={{.State.Status}}
	I1217 00:43:54.753065  307526 out.go:179] * Verifying Kubernetes components...
	I1217 00:43:54.754299  307526 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:43:54.779505  307526 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 00:43:54.780431  307526 addons.go:239] Setting addon default-storageclass=true in "auto-802249"
	I1217 00:43:54.780480  307526 host.go:66] Checking if "auto-802249" exists ...
	I1217 00:43:54.780947  307526 cli_runner.go:164] Run: docker container inspect auto-802249 --format={{.State.Status}}
	I1217 00:43:54.782107  307526 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:43:54.782129  307526 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 00:43:54.782180  307526 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-802249
	I1217 00:43:54.811663  307526 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/auto-802249/id_rsa Username:docker}
	I1217 00:43:54.812673  307526 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 00:43:54.812696  307526 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 00:43:54.812753  307526 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-802249
	I1217 00:43:54.837149  307526 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/auto-802249/id_rsa Username:docker}
	I1217 00:43:54.844552  307526 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1217 00:43:54.903756  307526 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 00:43:54.922070  307526 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:43:54.948057  307526 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:43:55.028215  307526 start.go:977] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1217 00:43:55.029798  307526 node_ready.go:35] waiting up to 15m0s for node "auto-802249" to be "Ready" ...
	I1217 00:43:55.214813  307526 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
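
The sed pipeline a few lines up rewrites the coredns ConfigMap so that host.minikube.internal resolves to the network gateway 192.168.94.1. Whether the record actually landed can be read back from the Corefile (sketch):

	kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}' | grep -A3 'hosts {'
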
	I1217 00:43:53.795315  313838 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kindnet-802249 --name kindnet-802249 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-802249 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kindnet-802249 --network kindnet-802249 --ip 192.168.103.2 --volume kindnet-802249:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78
	I1217 00:43:54.089737  313838 cli_runner.go:164] Run: docker container inspect kindnet-802249 --format={{.State.Running}}
	I1217 00:43:54.109743  313838 cli_runner.go:164] Run: docker container inspect kindnet-802249 --format={{.State.Status}}
	I1217 00:43:54.128234  313838 cli_runner.go:164] Run: docker exec kindnet-802249 stat /var/lib/dpkg/alternatives/iptables
	I1217 00:43:54.175722  313838 oci.go:144] the created container "kindnet-802249" has a running status.
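
Each --publish=127.0.0.1:: flag in the docker run above maps a container port to a random loopback-only host port; the SSH port used by the later sshutil lines (127.0.0.1:33113) comes from that mapping. The assignments can be listed with (sketch):

	docker port kindnet-802249          # all published ports
	docker port kindnet-802249 22/tcp   # just the SSH mapping
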
	I1217 00:43:54.175761  313838 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22168-12816/.minikube/machines/kindnet-802249/id_rsa...
	I1217 00:43:54.389865  313838 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22168-12816/.minikube/machines/kindnet-802249/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1217 00:43:54.425293  313838 cli_runner.go:164] Run: docker container inspect kindnet-802249 --format={{.State.Status}}
	I1217 00:43:54.447980  313838 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1217 00:43:54.448028  313838 kic_runner.go:114] Args: [docker exec --privileged kindnet-802249 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1217 00:43:54.494770  313838 cli_runner.go:164] Run: docker container inspect kindnet-802249 --format={{.State.Status}}
	I1217 00:43:54.514773  313838 machine.go:94] provisionDockerMachine start ...
	I1217 00:43:54.514890  313838 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-802249
	I1217 00:43:54.534052  313838 main.go:143] libmachine: Using SSH client type: native
	I1217 00:43:54.534346  313838 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1217 00:43:54.534363  313838 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 00:43:54.669837  313838 main.go:143] libmachine: SSH cmd err, output: <nil>: kindnet-802249
	
	I1217 00:43:54.669877  313838 ubuntu.go:182] provisioning hostname "kindnet-802249"
	I1217 00:43:54.669940  313838 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-802249
	I1217 00:43:54.694685  313838 main.go:143] libmachine: Using SSH client type: native
	I1217 00:43:54.695022  313838 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1217 00:43:54.695046  313838 main.go:143] libmachine: About to run SSH command:
	sudo hostname kindnet-802249 && echo "kindnet-802249" | sudo tee /etc/hostname
	I1217 00:43:54.850108  313838 main.go:143] libmachine: SSH cmd err, output: <nil>: kindnet-802249
	
	I1217 00:43:54.850196  313838 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-802249
	I1217 00:43:54.875628  313838 main.go:143] libmachine: Using SSH client type: native
	I1217 00:43:54.875936  313838 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1217 00:43:54.876020  313838 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-802249' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-802249/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-802249' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 00:43:55.012188  313838 main.go:143] libmachine: SSH cmd err, output: <nil>: 
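
The SSH snippet above only touches /etc/hosts when the new hostname is missing, either rewriting an existing 127.0.1.1 entry in place or appending one. The result can be checked from the host (sketch):

	docker exec kindnet-802249 grep -n '127.0.1.1' /etc/hosts
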
	I1217 00:43:55.012274  313838 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22168-12816/.minikube CaCertPath:/home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22168-12816/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22168-12816/.minikube}
	I1217 00:43:55.012313  313838 ubuntu.go:190] setting up certificates
	I1217 00:43:55.012326  313838 provision.go:84] configureAuth start
	I1217 00:43:55.012399  313838 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-802249
	I1217 00:43:55.034482  313838 provision.go:143] copyHostCerts
	I1217 00:43:55.034554  313838 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-12816/.minikube/ca.pem, removing ...
	I1217 00:43:55.034567  313838 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-12816/.minikube/ca.pem
	I1217 00:43:55.034650  313838 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22168-12816/.minikube/ca.pem (1078 bytes)
	I1217 00:43:55.034772  313838 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-12816/.minikube/cert.pem, removing ...
	I1217 00:43:55.034786  313838 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-12816/.minikube/cert.pem
	I1217 00:43:55.034834  313838 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22168-12816/.minikube/cert.pem (1123 bytes)
	I1217 00:43:55.035054  313838 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-12816/.minikube/key.pem, removing ...
	I1217 00:43:55.035070  313838 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-12816/.minikube/key.pem
	I1217 00:43:55.035120  313838 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22168-12816/.minikube/key.pem (1679 bytes)
	I1217 00:43:55.035206  313838 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22168-12816/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca-key.pem org=jenkins.kindnet-802249 san=[127.0.0.1 192.168.103.2 kindnet-802249 localhost minikube]
	I1217 00:43:55.117656  313838 provision.go:177] copyRemoteCerts
	I1217 00:43:55.117712  313838 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 00:43:55.117746  313838 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-802249
	I1217 00:43:55.137183  313838 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/kindnet-802249/id_rsa Username:docker}
	I1217 00:43:55.234028  313838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1217 00:43:55.253417  313838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I1217 00:43:55.270547  313838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 00:43:55.287665  313838 provision.go:87] duration metric: took 275.319952ms to configureAuth
	I1217 00:43:55.287692  313838 ubuntu.go:206] setting minikube options for container-runtime
	I1217 00:43:55.287875  313838 config.go:182] Loaded profile config "kindnet-802249": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:43:55.287987  313838 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-802249
	I1217 00:43:55.305860  313838 main.go:143] libmachine: Using SSH client type: native
	I1217 00:43:55.306170  313838 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1217 00:43:55.306189  313838 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 00:43:55.584605  313838 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 00:43:55.584630  313838 machine.go:97] duration metric: took 1.069834466s to provisionDockerMachine
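The provisioning step just above drops the runtime flag into an environment file and restarts CRI-O so it takes effect. A quick way to confirm what landed on the node (a sketch using the paths from the log; whether the crio unit actually reads /etc/sysconfig/crio.minikube depends on the base image's unit definition):

	cat /etc/sysconfig/crio.minikube        # expect: CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	systemctl cat crio | grep -i environment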
	I1217 00:43:55.584643  313838 client.go:176] duration metric: took 6.609524374s to LocalClient.Create
	I1217 00:43:55.584660  313838 start.go:167] duration metric: took 6.609587903s to libmachine.API.Create "kindnet-802249"
	I1217 00:43:55.584668  313838 start.go:293] postStartSetup for "kindnet-802249" (driver="docker")
	I1217 00:43:55.584678  313838 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 00:43:55.584740  313838 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 00:43:55.584788  313838 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-802249
	I1217 00:43:55.604929  313838 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/kindnet-802249/id_rsa Username:docker}
	I1217 00:43:55.700651  313838 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 00:43:55.704173  313838 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 00:43:55.704203  313838 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 00:43:55.704215  313838 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-12816/.minikube/addons for local assets ...
	I1217 00:43:55.704280  313838 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-12816/.minikube/files for local assets ...
	I1217 00:43:55.704394  313838 filesync.go:149] local asset: /home/jenkins/minikube-integration/22168-12816/.minikube/files/etc/ssl/certs/163542.pem -> 163542.pem in /etc/ssl/certs
	I1217 00:43:55.704523  313838 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 00:43:55.712405  313838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/files/etc/ssl/certs/163542.pem --> /etc/ssl/certs/163542.pem (1708 bytes)
	I1217 00:43:55.732675  313838 start.go:296] duration metric: took 147.993653ms for postStartSetup
	I1217 00:43:55.733034  313838 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-802249
	I1217 00:43:55.753293  313838 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/kindnet-802249/config.json ...
	I1217 00:43:55.753544  313838 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 00:43:55.753589  313838 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-802249
	I1217 00:43:55.772113  313838 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/kindnet-802249/id_rsa Username:docker}
	I1217 00:43:55.865735  313838 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 00:43:55.870560  313838 start.go:128] duration metric: took 6.89757341s to createHost
	I1217 00:43:55.870586  313838 start.go:83] releasing machines lock for "kindnet-802249", held for 6.897698079s
	I1217 00:43:55.870652  313838 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-802249
	I1217 00:43:55.888931  313838 ssh_runner.go:195] Run: cat /version.json
	I1217 00:43:55.888974  313838 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-802249
	I1217 00:43:55.889048  313838 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 00:43:55.889137  313838 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-802249
	I1217 00:43:55.907766  313838 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/kindnet-802249/id_rsa Username:docker}
	I1217 00:43:55.908236  313838 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/kindnet-802249/id_rsa Username:docker}
	I1217 00:43:56.063118  313838 ssh_runner.go:195] Run: systemctl --version
	I1217 00:43:56.071048  313838 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 00:43:56.109942  313838 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 00:43:56.114629  313838 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 00:43:56.114685  313838 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 00:43:56.140879  313838 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
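Pre-existing bridge or podman CNI definitions are parked rather than removed, since kindnet will be installed from its own manifest later: they are renamed with a .mk_disabled suffix, which is what the find/-exec call above does. The same idea as a standalone sketch:

	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -o -name '*podman*' \) -a ! -name '*.mk_disabled' \) \
	  -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;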
	I1217 00:43:56.140901  313838 start.go:496] detecting cgroup driver to use...
	I1217 00:43:56.140931  313838 detect.go:190] detected "systemd" cgroup driver on host os
	I1217 00:43:56.140975  313838 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 00:43:56.156326  313838 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 00:43:56.168258  313838 docker.go:218] disabling cri-docker service (if available) ...
	I1217 00:43:56.168311  313838 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 00:43:56.184103  313838 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 00:43:56.200566  313838 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 00:43:56.287131  313838 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 00:43:56.381643  313838 docker.go:234] disabling docker service ...
	I1217 00:43:56.381698  313838 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 00:43:56.399619  313838 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 00:43:56.411910  313838 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 00:43:56.499371  313838 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 00:43:56.579754  313838 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 00:43:56.591686  313838 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 00:43:56.605714  313838 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 00:43:56.605767  313838 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:43:56.615899  313838 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1217 00:43:56.615959  313838 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:43:56.624522  313838 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:43:56.633018  313838 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:43:56.641645  313838 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 00:43:56.649429  313838 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:43:56.657893  313838 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:43:56.670703  313838 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
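Taken together, the crictl.yaml write and the sed edits above leave CRI-O pointed at registry.k8s.io/pause:3.10.1, using the systemd cgroup manager with a pod-scoped conmon cgroup, and allowing unprivileged low ports via default_sysctls. A spot-check on the node (a sketch):

	cat /etc/crictl.yaml
	grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf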
	I1217 00:43:56.678946  313838 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 00:43:56.685849  313838 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 00:43:56.692753  313838 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:43:56.773293  313838 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1217 00:43:56.913529  313838 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 00:43:56.913585  313838 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 00:43:56.917759  313838 start.go:564] Will wait 60s for crictl version
	I1217 00:43:56.917822  313838 ssh_runner.go:195] Run: which crictl
	I1217 00:43:56.921519  313838 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 00:43:56.946851  313838 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1217 00:43:56.946926  313838 ssh_runner.go:195] Run: crio --version
	I1217 00:43:56.975110  313838 ssh_runner.go:195] Run: crio --version
	I1217 00:43:57.003458  313838 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1217 00:43:55.215824  307526 addons.go:530] duration metric: took 464.062441ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1217 00:43:55.533201  307526 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-802249" context rescaled to 1 replicas
	W1217 00:43:57.033455  307526 node_ready.go:57] node "auto-802249" has "Ready":"False" status (will retry)
	W1217 00:43:55.368021  301437 pod_ready.go:104] pod "coredns-66bc5c9577-vtspd" is not "Ready", error: <nil>
	W1217 00:43:57.867499  301437 pod_ready.go:104] pod "coredns-66bc5c9577-vtspd" is not "Ready", error: <nil>
	I1217 00:43:57.004572  313838 cli_runner.go:164] Run: docker network inspect kindnet-802249 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 00:43:57.022197  313838 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1217 00:43:57.026203  313838 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 00:43:57.037089  313838 kubeadm.go:884] updating cluster {Name:kindnet-802249 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kindnet-802249 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 00:43:57.037199  313838 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1217 00:43:57.037253  313838 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 00:43:57.066773  313838 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 00:43:57.066792  313838 crio.go:433] Images already preloaded, skipping extraction
	I1217 00:43:57.066829  313838 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 00:43:57.093098  313838 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 00:43:57.093118  313838 cache_images.go:86] Images are preloaded, skipping loading
	I1217 00:43:57.093125  313838 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.2 crio true true} ...
	I1217 00:43:57.093198  313838 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=kindnet-802249 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:kindnet-802249 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet}
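The kubelet flags above are delivered as a systemd drop-in (ExecStart= is cleared and then redefined, the standard override pattern), not by editing the packaged unit. To see what was actually written on the node (paths match the scp steps just below):

	systemctl cat kubelet
	cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf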
	I1217 00:43:57.093256  313838 ssh_runner.go:195] Run: crio config
	I1217 00:43:57.138285  313838 cni.go:84] Creating CNI manager for "kindnet"
	I1217 00:43:57.138307  313838 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 00:43:57.138327  313838 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-802249 NodeName:kindnet-802249 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 00:43:57.138484  313838 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kindnet-802249"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 00:43:57.138554  313838 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1217 00:43:57.147488  313838 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 00:43:57.147546  313838 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 00:43:57.155310  313838 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (365 bytes)
	I1217 00:43:57.168060  313838 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1217 00:43:57.182762  313838 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
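The kubeadm configuration rendered above (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration) is staged at /var/tmp/minikube/kubeadm.yaml.new and copied into place just before init. For a manual sanity check of such a file, recent kubeadm releases ship a validator (a sketch; availability of the subcommand in this exact build is an assumption):

	sudo /var/lib/minikube/binaries/v1.34.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml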
	I1217 00:43:57.194703  313838 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1217 00:43:57.198279  313838 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 00:43:57.208105  313838 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:43:57.295736  313838 ssh_runner.go:195] Run: sudo systemctl start kubelet
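kubelet is started here, before kubeadm init runs, so that it can pick up the static-pod manifests as soon as kubeadm writes them. If this step ever hangs in a run like this, the usual places to look are (a sketch):

	systemctl status kubelet --no-pager
	sudo journalctl -u kubelet --no-pager | tail -n 50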
	I1217 00:43:57.317230  313838 certs.go:69] Setting up /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/kindnet-802249 for IP: 192.168.103.2
	I1217 00:43:57.317247  313838 certs.go:195] generating shared ca certs ...
	I1217 00:43:57.317262  313838 certs.go:227] acquiring lock for ca certs: {Name:mk3fafd0dd66863a6056cb02497503a5e6afecf4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:43:57.317399  313838 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22168-12816/.minikube/ca.key
	I1217 00:43:57.317443  313838 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22168-12816/.minikube/proxy-client-ca.key
	I1217 00:43:57.317454  313838 certs.go:257] generating profile certs ...
	I1217 00:43:57.317502  313838 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/kindnet-802249/client.key
	I1217 00:43:57.317520  313838 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/kindnet-802249/client.crt with IP's: []
	I1217 00:43:57.385797  313838 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/kindnet-802249/client.crt ...
	I1217 00:43:57.385821  313838 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/kindnet-802249/client.crt: {Name:mk92cfd9d4891400b003067e68b73bcb09e793e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:43:57.385974  313838 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/kindnet-802249/client.key ...
	I1217 00:43:57.385985  313838 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/kindnet-802249/client.key: {Name:mk2ffb7563fe2e2f01507fc0ee4dd7a5f8f6e92f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:43:57.386090  313838 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/kindnet-802249/apiserver.key.57febc45
	I1217 00:43:57.386105  313838 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/kindnet-802249/apiserver.crt.57febc45 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1217 00:43:57.510696  313838 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/kindnet-802249/apiserver.crt.57febc45 ...
	I1217 00:43:57.510719  313838 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/kindnet-802249/apiserver.crt.57febc45: {Name:mke771a4c891a72cb294df456787840407961416 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:43:57.510870  313838 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/kindnet-802249/apiserver.key.57febc45 ...
	I1217 00:43:57.510883  313838 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/kindnet-802249/apiserver.key.57febc45: {Name:mkdbea0af7cef26436635d9259f16a4be906b200 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:43:57.510951  313838 certs.go:382] copying /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/kindnet-802249/apiserver.crt.57febc45 -> /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/kindnet-802249/apiserver.crt
	I1217 00:43:57.511044  313838 certs.go:386] copying /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/kindnet-802249/apiserver.key.57febc45 -> /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/kindnet-802249/apiserver.key
	I1217 00:43:57.511105  313838 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/kindnet-802249/proxy-client.key
	I1217 00:43:57.511119  313838 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/kindnet-802249/proxy-client.crt with IP's: []
	I1217 00:43:57.545154  313838 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/kindnet-802249/proxy-client.crt ...
	I1217 00:43:57.545178  313838 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/kindnet-802249/proxy-client.crt: {Name:mk6a57b3928a5e73a2b3cac1ff5564f5240dfb5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:43:57.545329  313838 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/kindnet-802249/proxy-client.key ...
	I1217 00:43:57.545345  313838 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/kindnet-802249/proxy-client.key: {Name:mk8cbf678d5d2979744f9ad4c4aed21830c25c1f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:43:57.545533  313838 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/16354.pem (1338 bytes)
	W1217 00:43:57.545575  313838 certs.go:480] ignoring /home/jenkins/minikube-integration/22168-12816/.minikube/certs/16354_empty.pem, impossibly tiny 0 bytes
	I1217 00:43:57.545587  313838 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca-key.pem (1679 bytes)
	I1217 00:43:57.545622  313838 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem (1078 bytes)
	I1217 00:43:57.545652  313838 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/cert.pem (1123 bytes)
	I1217 00:43:57.545679  313838 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/key.pem (1679 bytes)
	I1217 00:43:57.545725  313838 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/files/etc/ssl/certs/163542.pem (1708 bytes)
	I1217 00:43:57.546443  313838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 00:43:57.564820  313838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1217 00:43:57.582885  313838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 00:43:57.600323  313838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1671 bytes)
	I1217 00:43:57.617632  313838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/kindnet-802249/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1217 00:43:57.634694  313838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/kindnet-802249/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 00:43:57.651197  313838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/kindnet-802249/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 00:43:57.668362  313838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/kindnet-802249/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1217 00:43:57.684924  313838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/certs/16354.pem --> /usr/share/ca-certificates/16354.pem (1338 bytes)
	I1217 00:43:57.703858  313838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/files/etc/ssl/certs/163542.pem --> /usr/share/ca-certificates/163542.pem (1708 bytes)
	I1217 00:43:57.720906  313838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 00:43:57.738178  313838 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 00:43:57.752021  313838 ssh_runner.go:195] Run: openssl version
	I1217 00:43:57.758718  313838 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/16354.pem
	I1217 00:43:57.766189  313838 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/16354.pem /etc/ssl/certs/16354.pem
	I1217 00:43:57.773438  313838 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16354.pem
	I1217 00:43:57.777098  313838 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 00:13 /usr/share/ca-certificates/16354.pem
	I1217 00:43:57.777144  313838 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16354.pem
	I1217 00:43:57.813283  313838 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 00:43:57.821198  313838 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/16354.pem /etc/ssl/certs/51391683.0
	I1217 00:43:57.828616  313838 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/163542.pem
	I1217 00:43:57.835921  313838 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/163542.pem /etc/ssl/certs/163542.pem
	I1217 00:43:57.843347  313838 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/163542.pem
	I1217 00:43:57.847096  313838 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 00:13 /usr/share/ca-certificates/163542.pem
	I1217 00:43:57.847144  313838 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/163542.pem
	I1217 00:43:57.884282  313838 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 00:43:57.891543  313838 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/163542.pem /etc/ssl/certs/3ec20f2e.0
	I1217 00:43:57.898751  313838 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:43:57.906584  313838 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 00:43:57.913530  313838 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:43:57.917541  313838 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 00:05 /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:43:57.917590  313838 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:43:57.952623  313838 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 00:43:57.960031  313838 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
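The ls/openssl/ln sequence above reproduces OpenSSL's hashed CA directory layout: every CA file in /etc/ssl/certs gets a symlink named after its subject hash with a .0 suffix (b5213941.0 for minikubeCA here) so that verification can locate it. The generic idiom, as a sketch:

	CERT=/usr/share/ca-certificates/minikubeCA.pem
	HASH=$(openssl x509 -hash -noout -in "$CERT")
	sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"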
	I1217 00:43:57.967094  313838 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 00:43:57.970510  313838 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1217 00:43:57.970559  313838 kubeadm.go:401] StartCluster: {Name:kindnet-802249 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kindnet-802249 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:43:57.970619  313838 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 00:43:57.970665  313838 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 00:43:57.997411  313838 cri.go:89] found id: ""
	I1217 00:43:57.997467  313838 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 00:43:58.005542  313838 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 00:43:58.013743  313838 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 00:43:58.013822  313838 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 00:43:58.022216  313838 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 00:43:58.022257  313838 kubeadm.go:158] found existing configuration files:
	
	I1217 00:43:58.022304  313838 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1217 00:43:58.030037  313838 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 00:43:58.030082  313838 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 00:43:58.038143  313838 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1217 00:43:58.046709  313838 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 00:43:58.046758  313838 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 00:43:58.055580  313838 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1217 00:43:58.064836  313838 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 00:43:58.064883  313838 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 00:43:58.072259  313838 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1217 00:43:58.079615  313838 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 00:43:58.079665  313838 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 00:43:58.086672  313838 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 00:43:58.123646  313838 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1217 00:43:58.123708  313838 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 00:43:58.143710  313838 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 00:43:58.143791  313838 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1217 00:43:58.143850  313838 kubeadm.go:319] OS: Linux
	I1217 00:43:58.143937  313838 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 00:43:58.144008  313838 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 00:43:58.144084  313838 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 00:43:58.144124  313838 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 00:43:58.144192  313838 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 00:43:58.144253  313838 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 00:43:58.144341  313838 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 00:43:58.144419  313838 kubeadm.go:319] CGROUPS_IO: enabled
	I1217 00:43:58.199791  313838 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 00:43:58.199983  313838 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 00:43:58.200137  313838 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
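The preflight message above points at the usual way to pre-pull the control-plane images outside of init; with the rendered config already on the node that would look roughly like this (a sketch using the binary and config paths from the log):

	sudo /var/lib/minikube/binaries/v1.34.2/kubeadm config images list --config /var/tmp/minikube/kubeadm.yaml
	sudo /var/lib/minikube/binaries/v1.34.2/kubeadm config images pull --config /var/tmp/minikube/kubeadm.yaml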
	I1217 00:43:58.208178  313838 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 00:43:58.210887  313838 out.go:252]   - Generating certificates and keys ...
	I1217 00:43:58.211008  313838 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 00:43:58.211099  313838 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 00:43:58.585765  313838 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	W1217 00:43:56.249954  306295 pod_ready.go:104] pod "coredns-66bc5c9577-v76f4" is not "Ready", error: <nil>
	W1217 00:43:58.749379  306295 pod_ready.go:104] pod "coredns-66bc5c9577-v76f4" is not "Ready", error: <nil>
	I1217 00:43:58.868066  301437 pod_ready.go:94] pod "coredns-66bc5c9577-vtspd" is "Ready"
	I1217 00:43:58.868088  301437 pod_ready.go:86] duration metric: took 35.505656374s for pod "coredns-66bc5c9577-vtspd" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:43:58.870625  301437 pod_ready.go:83] waiting for pod "etcd-embed-certs-153232" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:43:58.874003  301437 pod_ready.go:94] pod "etcd-embed-certs-153232" is "Ready"
	I1217 00:43:58.874019  301437 pod_ready.go:86] duration metric: took 3.374352ms for pod "etcd-embed-certs-153232" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:43:58.875921  301437 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-153232" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:43:58.879161  301437 pod_ready.go:94] pod "kube-apiserver-embed-certs-153232" is "Ready"
	I1217 00:43:58.879179  301437 pod_ready.go:86] duration metric: took 3.241989ms for pod "kube-apiserver-embed-certs-153232" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:43:58.880975  301437 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-153232" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:43:59.066631  301437 pod_ready.go:94] pod "kube-controller-manager-embed-certs-153232" is "Ready"
	I1217 00:43:59.066655  301437 pod_ready.go:86] duration metric: took 185.632998ms for pod "kube-controller-manager-embed-certs-153232" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:43:59.266673  301437 pod_ready.go:83] waiting for pod "kube-proxy-82b8k" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:43:59.665791  301437 pod_ready.go:94] pod "kube-proxy-82b8k" is "Ready"
	I1217 00:43:59.665819  301437 pod_ready.go:86] duration metric: took 399.116964ms for pod "kube-proxy-82b8k" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:43:59.866885  301437 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-153232" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:44:00.266138  301437 pod_ready.go:94] pod "kube-scheduler-embed-certs-153232" is "Ready"
	I1217 00:44:00.266171  301437 pod_ready.go:86] duration metric: took 399.26082ms for pod "kube-scheduler-embed-certs-153232" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:44:00.266188  301437 pod_ready.go:40] duration metric: took 36.907957575s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 00:44:00.315611  301437 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1217 00:44:00.318195  301437 out.go:179] * Done! kubectl is now configured to use "embed-certs-153232" cluster and "default" namespace by default
	I1217 00:43:58.864406  313838 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1217 00:43:58.940351  313838 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1217 00:43:59.053118  313838 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1217 00:43:59.147896  313838 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1217 00:43:59.148091  313838 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [kindnet-802249 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1217 00:43:59.522555  313838 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1217 00:43:59.522728  313838 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [kindnet-802249 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1217 00:43:59.592047  313838 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1217 00:44:00.219192  313838 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1217 00:44:00.400112  313838 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1217 00:44:00.400255  313838 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 00:44:00.691745  313838 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 00:44:00.833794  313838 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 00:44:00.905110  313838 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 00:44:01.053550  313838 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 00:44:01.242847  313838 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 00:44:01.243519  313838 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 00:44:01.247610  313838 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1217 00:43:59.532905  307526 node_ready.go:57] node "auto-802249" has "Ready":"False" status (will retry)
	W1217 00:44:02.032674  307526 node_ready.go:57] node "auto-802249" has "Ready":"False" status (will retry)
	I1217 00:44:01.248940  313838 out.go:252]   - Booting up control plane ...
	I1217 00:44:01.249105  313838 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 00:44:01.249226  313838 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 00:44:01.249948  313838 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 00:44:01.264083  313838 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 00:44:01.264175  313838 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 00:44:01.270691  313838 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 00:44:01.271068  313838 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 00:44:01.271143  313838 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 00:44:01.368444  313838 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 00:44:01.368635  313838 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 00:44:02.369489  313838 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001755867s
	I1217 00:44:02.372487  313838 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1217 00:44:02.372606  313838 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1217 00:44:02.372722  313838 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1217 00:44:02.372852  313838 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1217 00:44:03.427929  313838 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.052397104s
	W1217 00:44:01.249224  306295 pod_ready.go:104] pod "coredns-66bc5c9577-v76f4" is not "Ready", error: <nil>
	W1217 00:44:03.752442  306295 pod_ready.go:104] pod "coredns-66bc5c9577-v76f4" is not "Ready", error: <nil>
	I1217 00:44:03.868822  313838 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.496304145s
	I1217 00:44:05.374156  313838 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.001561713s
	I1217 00:44:05.390166  313838 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1217 00:44:05.399767  313838 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1217 00:44:05.407445  313838 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1217 00:44:05.407721  313838 kubeadm.go:319] [mark-control-plane] Marking the node kindnet-802249 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1217 00:44:05.415604  313838 kubeadm.go:319] [bootstrap-token] Using token: cs49lo.756orz921ne6woru
	I1217 00:44:05.416721  313838 out.go:252]   - Configuring RBAC rules ...
	I1217 00:44:05.416869  313838 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1217 00:44:05.420549  313838 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1217 00:44:05.425523  313838 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1217 00:44:05.427836  313838 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1217 00:44:05.430123  313838 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1217 00:44:05.433148  313838 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1217 00:44:05.780559  313838 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1217 00:44:06.194472  313838 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1217 00:44:06.780539  313838 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1217 00:44:06.781493  313838 kubeadm.go:319] 
	I1217 00:44:06.781588  313838 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1217 00:44:06.781600  313838 kubeadm.go:319] 
	I1217 00:44:06.781716  313838 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1217 00:44:06.781726  313838 kubeadm.go:319] 
	I1217 00:44:06.781756  313838 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1217 00:44:06.781838  313838 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1217 00:44:06.781892  313838 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1217 00:44:06.781897  313838 kubeadm.go:319] 
	I1217 00:44:06.781983  313838 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1217 00:44:06.782015  313838 kubeadm.go:319] 
	I1217 00:44:06.782067  313838 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1217 00:44:06.782077  313838 kubeadm.go:319] 
	I1217 00:44:06.782136  313838 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1217 00:44:06.782198  313838 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1217 00:44:06.782256  313838 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1217 00:44:06.782262  313838 kubeadm.go:319] 
	I1217 00:44:06.782368  313838 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1217 00:44:06.782472  313838 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1217 00:44:06.782487  313838 kubeadm.go:319] 
	I1217 00:44:06.782598  313838 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token cs49lo.756orz921ne6woru \
	I1217 00:44:06.782724  313838 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:a7c34974519aee4953e03245da076d7a2eba06e40135880a85806e2dab303fa1 \
	I1217 00:44:06.782760  313838 kubeadm.go:319] 	--control-plane 
	I1217 00:44:06.782774  313838 kubeadm.go:319] 
	I1217 00:44:06.782881  313838 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1217 00:44:06.782889  313838 kubeadm.go:319] 
	I1217 00:44:06.783023  313838 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token cs49lo.756orz921ne6woru \
	I1217 00:44:06.783175  313838 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:a7c34974519aee4953e03245da076d7a2eba06e40135880a85806e2dab303fa1 
	I1217 00:44:06.786022  313838 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1217 00:44:06.786111  313838 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
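Both warnings above are expected on this runner: the kernel-config check cannot load the configs module on the 6.8.0-1045-gcp kernel (SystemVerification is already in the ignore list passed to kubeadm), and minikube starts kubelet itself rather than enabling the unit. If an enabled unit were wanted, the fix is exactly what the warning names:

	sudo systemctl enable kubelet.service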
	I1217 00:44:06.786132  313838 cni.go:84] Creating CNI manager for "kindnet"
	I1217 00:44:06.787727  313838 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1217 00:44:04.033110  307526 node_ready.go:57] node "auto-802249" has "Ready":"False" status (will retry)
	W1217 00:44:06.033595  307526 node_ready.go:57] node "auto-802249" has "Ready":"False" status (will retry)
	I1217 00:44:06.789157  313838 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1217 00:44:06.793369  313838 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1217 00:44:06.793384  313838 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1217 00:44:06.807960  313838 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1217 00:44:07.015406  313838 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1217 00:44:07.015540  313838 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes kindnet-802249 minikube.k8s.io/updated_at=2025_12_17T00_44_07_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=c7bb9b74fe8fa422b352c813eb039f077f405cb1 minikube.k8s.io/name=kindnet-802249 minikube.k8s.io/primary=true
	I1217 00:44:07.015561  313838 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:44:07.026153  313838 ops.go:34] apiserver oom_adj: -16
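With the kindnet manifest applied, the node labeled, and the cluster-admin binding created, a couple of read-only checks confirm the result (a sketch, using the same kubeconfig and kubectl binary the log uses):

	sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get daemonsets
	sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get node kindnet-802249 --show-labels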
	I1217 00:44:07.095769  313838 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:44:07.596762  313838 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:44:08.096600  313838 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:44:08.596371  313838 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1217 00:44:06.248953  306295 pod_ready.go:104] pod "coredns-66bc5c9577-v76f4" is not "Ready", error: <nil>
	W1217 00:44:08.249591  306295 pod_ready.go:104] pod "coredns-66bc5c9577-v76f4" is not "Ready", error: <nil>
	I1217 00:44:09.095823  313838 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:44:09.596406  313838 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:44:10.095948  313838 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:44:10.596695  313838 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:44:11.095801  313838 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:44:11.596159  313838 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:44:12.096735  313838 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:44:12.171026  313838 kubeadm.go:1114] duration metric: took 5.155536062s to wait for elevateKubeSystemPrivileges
	I1217 00:44:12.171066  313838 kubeadm.go:403] duration metric: took 14.200507734s to StartCluster
	I1217 00:44:12.171088  313838 settings.go:142] acquiring lock: {Name:mk7d7632cd00ceda791845d793d841181ea8188a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:44:12.171157  313838 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22168-12816/kubeconfig
	I1217 00:44:12.172967  313838 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/kubeconfig: {Name:mkd977475b39383babd1c1bcd25b2b3c1ea2d727 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:44:12.173234  313838 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1217 00:44:12.173250  313838 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 00:44:12.173233  313838 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 00:44:12.173327  313838 addons.go:70] Setting default-storageclass=true in profile "kindnet-802249"
	I1217 00:44:12.173344  313838 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "kindnet-802249"
	I1217 00:44:12.173415  313838 config.go:182] Loaded profile config "kindnet-802249": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:44:12.173321  313838 addons.go:70] Setting storage-provisioner=true in profile "kindnet-802249"
	I1217 00:44:12.173452  313838 addons.go:239] Setting addon storage-provisioner=true in "kindnet-802249"
	I1217 00:44:12.173497  313838 host.go:66] Checking if "kindnet-802249" exists ...
	I1217 00:44:12.173901  313838 cli_runner.go:164] Run: docker container inspect kindnet-802249 --format={{.State.Status}}
	I1217 00:44:12.174119  313838 cli_runner.go:164] Run: docker container inspect kindnet-802249 --format={{.State.Status}}
	I1217 00:44:12.175581  313838 out.go:179] * Verifying Kubernetes components...
	I1217 00:44:12.177061  313838 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:44:12.196411  313838 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 00:44:12.196900  313838 addons.go:239] Setting addon default-storageclass=true in "kindnet-802249"
	I1217 00:44:12.196943  313838 host.go:66] Checking if "kindnet-802249" exists ...
	I1217 00:44:12.197440  313838 cli_runner.go:164] Run: docker container inspect kindnet-802249 --format={{.State.Status}}
	I1217 00:44:12.197519  313838 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:44:12.197539  313838 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 00:44:12.197588  313838 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-802249
	I1217 00:44:12.225604  313838 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 00:44:12.225629  313838 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 00:44:12.225672  313838 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-802249
	I1217 00:44:12.225803  313838 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/kindnet-802249/id_rsa Username:docker}
	I1217 00:44:12.254222  313838 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/kindnet-802249/id_rsa Username:docker}
	I1217 00:44:12.275749  313838 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1217 00:44:12.345522  313838 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 00:44:12.350380  313838 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:44:12.372208  313838 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:44:12.505134  313838 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1217 00:44:12.506852  313838 node_ready.go:35] waiting up to 15m0s for node "kindnet-802249" to be "Ready" ...
	I1217 00:44:12.682077  313838 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	W1217 00:44:08.532510  307526 node_ready.go:57] node "auto-802249" has "Ready":"False" status (will retry)
	W1217 00:44:10.532916  307526 node_ready.go:57] node "auto-802249" has "Ready":"False" status (will retry)
	W1217 00:44:12.534707  307526 node_ready.go:57] node "auto-802249" has "Ready":"False" status (will retry)
	I1217 00:44:12.683060  313838 addons.go:530] duration metric: took 509.805357ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1217 00:44:13.009768  313838 kapi.go:214] "coredns" deployment in "kube-system" namespace and "kindnet-802249" context rescaled to 1 replicas
	
	
	==> CRI-O <==
	Dec 17 00:43:33 embed-certs-153232 crio[568]: time="2025-12-17T00:43:33.313677421Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 17 00:43:33 embed-certs-153232 crio[568]: time="2025-12-17T00:43:33.317636191Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 17 00:43:33 embed-certs-153232 crio[568]: time="2025-12-17T00:43:33.317656109Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 17 00:43:47 embed-certs-153232 crio[568]: time="2025-12-17T00:43:47.565885589Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=6f22add4-ed3c-4ad6-86b9-6c89bb9b94e3 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:43:47 embed-certs-153232 crio[568]: time="2025-12-17T00:43:47.569936688Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=7e024577-c8f1-4791-94fa-d1c70a185b93 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:43:47 embed-certs-153232 crio[568]: time="2025-12-17T00:43:47.573460726Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9pfwm/dashboard-metrics-scraper" id=c0b102d8-31a6-48e3-9b32-d789aacbc4ee name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 00:43:47 embed-certs-153232 crio[568]: time="2025-12-17T00:43:47.573599443Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 00:43:47 embed-certs-153232 crio[568]: time="2025-12-17T00:43:47.582458096Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 00:43:47 embed-certs-153232 crio[568]: time="2025-12-17T00:43:47.583127918Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 00:43:47 embed-certs-153232 crio[568]: time="2025-12-17T00:43:47.628031625Z" level=info msg="Created container 3d2c3aa6013510ed343b70dda91e1024e94192c440d8cb7aa743b80510c1917f: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9pfwm/dashboard-metrics-scraper" id=c0b102d8-31a6-48e3-9b32-d789aacbc4ee name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 00:43:47 embed-certs-153232 crio[568]: time="2025-12-17T00:43:47.628719425Z" level=info msg="Starting container: 3d2c3aa6013510ed343b70dda91e1024e94192c440d8cb7aa743b80510c1917f" id=14e737b1-f6cf-4071-ba6f-a103928ff2eb name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 00:43:47 embed-certs-153232 crio[568]: time="2025-12-17T00:43:47.631442448Z" level=info msg="Started container" PID=1763 containerID=3d2c3aa6013510ed343b70dda91e1024e94192c440d8cb7aa743b80510c1917f description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9pfwm/dashboard-metrics-scraper id=14e737b1-f6cf-4071-ba6f-a103928ff2eb name=/runtime.v1.RuntimeService/StartContainer sandboxID=8898d3657ffc648c2e23a2e3c84ba91090a624613ccfa7e399701cc6657c0761
	Dec 17 00:43:47 embed-certs-153232 crio[568]: time="2025-12-17T00:43:47.679758586Z" level=info msg="Removing container: a31ea9167311a0aaaa4fc8157542f43c53c0b4488e6d8118dc1dd2dee64b8e0c" id=8f462a94-0297-43d2-9a5b-583a3569ec98 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 17 00:43:47 embed-certs-153232 crio[568]: time="2025-12-17T00:43:47.693972935Z" level=info msg="Removed container a31ea9167311a0aaaa4fc8157542f43c53c0b4488e6d8118dc1dd2dee64b8e0c: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9pfwm/dashboard-metrics-scraper" id=8f462a94-0297-43d2-9a5b-583a3569ec98 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 17 00:43:53 embed-certs-153232 crio[568]: time="2025-12-17T00:43:53.697691571Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=c44c8f59-4acf-424f-afd1-f6adb9a8c014 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:43:53 embed-certs-153232 crio[568]: time="2025-12-17T00:43:53.700050041Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=4775b348-5537-4b3c-8ca7-be0fd9c69944 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:43:53 embed-certs-153232 crio[568]: time="2025-12-17T00:43:53.706481838Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=b22d5b04-b044-4d96-b873-fc91a656925d name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 00:43:53 embed-certs-153232 crio[568]: time="2025-12-17T00:43:53.706618997Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 00:43:53 embed-certs-153232 crio[568]: time="2025-12-17T00:43:53.714376304Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 00:43:53 embed-certs-153232 crio[568]: time="2025-12-17T00:43:53.714579822Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/8b505f63b6dc7321183c52d8c3a8c90aa316d54ac6680e558659344294668f83/merged/etc/passwd: no such file or directory"
	Dec 17 00:43:53 embed-certs-153232 crio[568]: time="2025-12-17T00:43:53.714621469Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/8b505f63b6dc7321183c52d8c3a8c90aa316d54ac6680e558659344294668f83/merged/etc/group: no such file or directory"
	Dec 17 00:43:53 embed-certs-153232 crio[568]: time="2025-12-17T00:43:53.714937129Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 00:43:53 embed-certs-153232 crio[568]: time="2025-12-17T00:43:53.754462944Z" level=info msg="Created container 4aa28ef7b86e0ac2c8860e0731143889f5585d08d1c8e3092e5fdbae502d7645: kube-system/storage-provisioner/storage-provisioner" id=b22d5b04-b044-4d96-b873-fc91a656925d name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 00:43:53 embed-certs-153232 crio[568]: time="2025-12-17T00:43:53.755507607Z" level=info msg="Starting container: 4aa28ef7b86e0ac2c8860e0731143889f5585d08d1c8e3092e5fdbae502d7645" id=27d8dc3d-3d31-4460-be6e-0a0bf3e535d4 name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 00:43:53 embed-certs-153232 crio[568]: time="2025-12-17T00:43:53.75803986Z" level=info msg="Started container" PID=1777 containerID=4aa28ef7b86e0ac2c8860e0731143889f5585d08d1c8e3092e5fdbae502d7645 description=kube-system/storage-provisioner/storage-provisioner id=27d8dc3d-3d31-4460-be6e-0a0bf3e535d4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=615b17a22f66a573db0f49677f28f795a1d05848ced32f6e454f6af3018ae915
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	4aa28ef7b86e0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           20 seconds ago      Running             storage-provisioner         1                   615b17a22f66a       storage-provisioner                          kube-system
	3d2c3aa601351       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           27 seconds ago      Exited              dashboard-metrics-scraper   2                   8898d3657ffc6       dashboard-metrics-scraper-6ffb444bf9-9pfwm   kubernetes-dashboard
	d4b900c582c6a       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   41 seconds ago      Running             kubernetes-dashboard        0                   89f3d9a3a354d       kubernetes-dashboard-855c9754f9-472j2        kubernetes-dashboard
	932d916c8f226       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           51 seconds ago      Running             coredns                     0                   6dbab6d186c3c       coredns-66bc5c9577-vtspd                     kube-system
	e8fd5e53eb9ce       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           51 seconds ago      Running             busybox                     1                   169be801cfb10       busybox                                      default
	9e12fba8024ab       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           51 seconds ago      Exited              storage-provisioner         0                   615b17a22f66a       storage-provisioner                          kube-system
	e8ac4e7470f94       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                           51 seconds ago      Running             kube-proxy                  0                   d3a75bef2497b       kube-proxy-82b8k                             kube-system
	6301e99f54ccb       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           51 seconds ago      Running             kindnet-cni                 0                   714f068ea711e       kindnet-zffzt                                kube-system
	dadde2213b8a8       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                           54 seconds ago      Running             kube-controller-manager     0                   b79095ab23e17       kube-controller-manager-embed-certs-153232   kube-system
	117e1e782a798       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                           54 seconds ago      Running             etcd                        0                   c3a0485a6ea40       etcd-embed-certs-153232                      kube-system
	f3a000d40d6d7       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                           54 seconds ago      Running             kube-scheduler              0                   a974f7cb0be76       kube-scheduler-embed-certs-153232            kube-system
	a770bc08061f9       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                           54 seconds ago      Running             kube-apiserver              0                   51b381b17bdcf       kube-apiserver-embed-certs-153232            kube-system
	
	
	==> coredns [932d916c8f226125fbf4338249dcdb35a5f6d7adf40a1fb61934237d9cba3980] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:42590 - 1415 "HINFO IN 902495801244066443.6793965876226482938. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.059098168s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-153232
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-153232
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c7bb9b74fe8fa422b352c813eb039f077f405cb1
	                    minikube.k8s.io/name=embed-certs-153232
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_17T00_42_25_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Dec 2025 00:42:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-153232
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Dec 2025 00:44:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Dec 2025 00:43:52 +0000   Wed, 17 Dec 2025 00:42:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Dec 2025 00:43:52 +0000   Wed, 17 Dec 2025 00:42:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Dec 2025 00:43:52 +0000   Wed, 17 Dec 2025 00:42:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Dec 2025 00:43:52 +0000   Wed, 17 Dec 2025 00:42:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    embed-certs-153232
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 f435001bb2f82675fe905cfc693dda54
	  System UUID:                5d400583-a23e-4e06-8ba1-0a6ece90e0c3
	  Boot ID:                    0e9cedc6-c46e-4354-b3d2-9272a8b33ae5
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 coredns-66bc5c9577-vtspd                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     104s
	  kube-system                 etcd-embed-certs-153232                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         110s
	  kube-system                 kindnet-zffzt                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      104s
	  kube-system                 kube-apiserver-embed-certs-153232             250m (3%)     0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-controller-manager-embed-certs-153232    200m (2%)     0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-proxy-82b8k                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 kube-scheduler-embed-certs-153232             100m (1%)     0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-9pfwm    0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-472j2         0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 103s               kube-proxy       
	  Normal  Starting                 51s                kube-proxy       
	  Normal  NodeHasSufficientMemory  110s               kubelet          Node embed-certs-153232 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    110s               kubelet          Node embed-certs-153232 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     110s               kubelet          Node embed-certs-153232 status is now: NodeHasSufficientPID
	  Normal  Starting                 110s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           105s               node-controller  Node embed-certs-153232 event: Registered Node embed-certs-153232 in Controller
	  Normal  NodeReady                93s                kubelet          Node embed-certs-153232 status is now: NodeReady
	  Normal  Starting                 55s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  55s (x8 over 55s)  kubelet          Node embed-certs-153232 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    55s (x8 over 55s)  kubelet          Node embed-certs-153232 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     55s (x8 over 55s)  kubelet          Node embed-certs-153232 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           49s                node-controller  Node embed-certs-153232 event: Registered Node embed-certs-153232 in Controller
	
	
	==> dmesg <==
	[  +0.089382] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024236] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.864694] kauditd_printk_skb: 47 callbacks suppressed
	[Dec17 00:07] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +1.006904] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +1.023891] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +1.023891] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +1.023853] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +1.023879] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +2.048755] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +4.030595] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +8.447143] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[ +16.382404] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000015] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[Dec17 00:08] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	
	
	==> etcd [117e1e782a79833091ca7f1a9da4be915158517d3d54c5674f3b4e0875f18cce] <==
	{"level":"warn","ts":"2025-12-17T00:43:21.052667Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47238","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:21.060746Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47262","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:21.072093Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47280","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:21.076654Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:21.085468Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47318","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:21.094387Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47328","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:21.102423Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47334","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:21.113716Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47368","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:21.122246Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47370","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:21.130458Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47386","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:21.139078Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47400","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:21.146106Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:21.154292Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47438","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:21.162306Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47454","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:21.171706Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47474","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:21.183837Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47490","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:21.189870Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47498","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:21.197272Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47516","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:21.205803Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47548","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:21.213429Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47558","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:21.234824Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47574","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:21.242650Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47582","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:21.249470Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47602","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:21.303800Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47620","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-17T00:43:52.667139Z","caller":"traceutil/trace.go:172","msg":"trace[890311236] transaction","detail":"{read_only:false; response_revision:617; number_of_response:1; }","duration":"117.46062ms","start":"2025-12-17T00:43:52.549660Z","end":"2025-12-17T00:43:52.667121Z","steps":["trace[890311236] 'process raft request'  (duration: 117.325339ms)"],"step_count":1}
	
	
	==> kernel <==
	 00:44:14 up  1:26,  0 user,  load average: 3.15, 2.91, 2.02
	Linux embed-certs-153232 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [6301e99f54ccbfcaa7a5dde58d324c165f0fe60d9d03ed0b9fa97c55700ac344] <==
	I1217 00:43:23.085153       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1217 00:43:23.085408       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1217 00:43:23.085555       1 main.go:148] setting mtu 1500 for CNI 
	I1217 00:43:23.085570       1 main.go:178] kindnetd IP family: "ipv4"
	I1217 00:43:23.085590       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-17T00:43:23Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1217 00:43:23.378831       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1217 00:43:23.378863       1 controller.go:381] "Waiting for informer caches to sync"
	I1217 00:43:23.378884       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1217 00:43:23.478257       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1217 00:43:23.747546       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1217 00:43:23.747580       1 metrics.go:72] Registering metrics
	I1217 00:43:23.747658       1 controller.go:711] "Syncing nftables rules"
	I1217 00:43:33.288730       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1217 00:43:33.288801       1 main.go:301] handling current node
	I1217 00:43:43.290099       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1217 00:43:43.290133       1 main.go:301] handling current node
	I1217 00:43:53.287868       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1217 00:43:53.287907       1 main.go:301] handling current node
	I1217 00:44:03.291465       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1217 00:44:03.291509       1 main.go:301] handling current node
	I1217 00:44:13.296130       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1217 00:44:13.296165       1 main.go:301] handling current node
	
	
	==> kube-apiserver [a770bc08061f975f567cb7fb7cec6883ec6d5215d19863d7ddb2cc0049571d8b] <==
	I1217 00:43:21.789725       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1217 00:43:21.789738       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1217 00:43:21.790423       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1217 00:43:21.790436       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1217 00:43:21.790468       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1217 00:43:21.790579       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1217 00:43:21.800080       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1217 00:43:21.811283       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1217 00:43:21.811412       1 aggregator.go:171] initial CRD sync complete...
	I1217 00:43:21.811446       1 autoregister_controller.go:144] Starting autoregister controller
	I1217 00:43:21.811471       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1217 00:43:21.811494       1 cache.go:39] Caches are synced for autoregister controller
	I1217 00:43:21.823858       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1217 00:43:21.843663       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1217 00:43:22.087388       1 controller.go:667] quota admission added evaluator for: namespaces
	I1217 00:43:22.116178       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1217 00:43:22.134397       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1217 00:43:22.150264       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1217 00:43:22.156390       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1217 00:43:22.187916       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.111.84.109"}
	I1217 00:43:22.196745       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.106.234.0"}
	I1217 00:43:22.692814       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1217 00:43:25.177916       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1217 00:43:25.523903       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1217 00:43:25.672837       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [dadde2213b8a894873343cf42602c1bedb001a3311bd9672a69d0fa4a07d9786] <==
	I1217 00:43:25.121252       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1217 00:43:25.121983       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1217 00:43:25.125986       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1217 00:43:25.127109       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1217 00:43:25.127177       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1217 00:43:25.127213       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1217 00:43:25.127217       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1217 00:43:25.127221       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1217 00:43:25.128248       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1217 00:43:25.128306       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1217 00:43:25.129487       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1217 00:43:25.129601       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1217 00:43:25.131877       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1217 00:43:25.133042       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1217 00:43:25.135314       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1217 00:43:25.135376       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1217 00:43:25.137520       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1217 00:43:25.138769       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1217 00:43:25.141097       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1217 00:43:25.143331       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1217 00:43:25.145606       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1217 00:43:25.145730       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1217 00:43:25.145817       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-153232"
	I1217 00:43:25.145890       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1217 00:43:25.157532       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [e8ac4e7470f9424e1e7541237e9c9cdc16aa75232ea66c1cdc71939466c64b0d] <==
	I1217 00:43:22.982655       1 server_linux.go:53] "Using iptables proxy"
	I1217 00:43:23.041714       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1217 00:43:23.142508       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1217 00:43:23.142547       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1217 00:43:23.142615       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1217 00:43:23.161300       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1217 00:43:23.161356       1 server_linux.go:132] "Using iptables Proxier"
	I1217 00:43:23.166342       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1217 00:43:23.166836       1 server.go:527] "Version info" version="v1.34.2"
	I1217 00:43:23.166874       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 00:43:23.168439       1 config.go:403] "Starting serviceCIDR config controller"
	I1217 00:43:23.169088       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1217 00:43:23.169117       1 config.go:309] "Starting node config controller"
	I1217 00:43:23.169146       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1217 00:43:23.169156       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1217 00:43:23.168449       1 config.go:200] "Starting service config controller"
	I1217 00:43:23.169279       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1217 00:43:23.170980       1 config.go:106] "Starting endpoint slice config controller"
	I1217 00:43:23.171017       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1217 00:43:23.269350       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1217 00:43:23.270456       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1217 00:43:23.271667       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [f3a000d40d6d7ebc54a27ecd08dc5aa3b530c6e66b7327ec3ec09941fca5d2ce] <==
	I1217 00:43:21.497943       1 serving.go:386] Generated self-signed cert in-memory
	I1217 00:43:22.129848       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1217 00:43:22.129888       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 00:43:22.134584       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1217 00:43:22.134629       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1217 00:43:22.134676       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1217 00:43:22.135171       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1217 00:43:22.134711       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1217 00:43:22.135049       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1217 00:43:22.135233       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1217 00:43:22.135066       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1217 00:43:22.235245       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1217 00:43:22.235252       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1217 00:43:22.237321       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Dec 17 00:43:25 embed-certs-153232 kubelet[733]: I1217 00:43:25.675375     733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/9f5811f0-bf00-4b4b-a326-a1e04c616776-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-472j2\" (UID: \"9f5811f0-bf00-4b4b-a326-a1e04c616776\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-472j2"
	Dec 17 00:43:25 embed-certs-153232 kubelet[733]: I1217 00:43:25.675404     733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q676m\" (UniqueName: \"kubernetes.io/projected/9f5811f0-bf00-4b4b-a326-a1e04c616776-kube-api-access-q676m\") pod \"kubernetes-dashboard-855c9754f9-472j2\" (UID: \"9f5811f0-bf00-4b4b-a326-a1e04c616776\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-472j2"
	Dec 17 00:43:28 embed-certs-153232 kubelet[733]: I1217 00:43:28.613821     733 scope.go:117] "RemoveContainer" containerID="cc772cfb311a9881bcbe4f6ed1033793fa717f2e540bc07449315af49ef193b9"
	Dec 17 00:43:28 embed-certs-153232 kubelet[733]: I1217 00:43:28.747561     733 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Dec 17 00:43:29 embed-certs-153232 kubelet[733]: I1217 00:43:29.619076     733 scope.go:117] "RemoveContainer" containerID="cc772cfb311a9881bcbe4f6ed1033793fa717f2e540bc07449315af49ef193b9"
	Dec 17 00:43:29 embed-certs-153232 kubelet[733]: I1217 00:43:29.619443     733 scope.go:117] "RemoveContainer" containerID="a31ea9167311a0aaaa4fc8157542f43c53c0b4488e6d8118dc1dd2dee64b8e0c"
	Dec 17 00:43:29 embed-certs-153232 kubelet[733]: E1217 00:43:29.619624     733 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-9pfwm_kubernetes-dashboard(657354ac-ce6a-4ee6-b133-99fa4afa1442)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9pfwm" podUID="657354ac-ce6a-4ee6-b133-99fa4afa1442"
	Dec 17 00:43:30 embed-certs-153232 kubelet[733]: I1217 00:43:30.623430     733 scope.go:117] "RemoveContainer" containerID="a31ea9167311a0aaaa4fc8157542f43c53c0b4488e6d8118dc1dd2dee64b8e0c"
	Dec 17 00:43:30 embed-certs-153232 kubelet[733]: E1217 00:43:30.623593     733 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-9pfwm_kubernetes-dashboard(657354ac-ce6a-4ee6-b133-99fa4afa1442)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9pfwm" podUID="657354ac-ce6a-4ee6-b133-99fa4afa1442"
	Dec 17 00:43:35 embed-certs-153232 kubelet[733]: I1217 00:43:35.090631     733 scope.go:117] "RemoveContainer" containerID="a31ea9167311a0aaaa4fc8157542f43c53c0b4488e6d8118dc1dd2dee64b8e0c"
	Dec 17 00:43:35 embed-certs-153232 kubelet[733]: E1217 00:43:35.090883     733 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-9pfwm_kubernetes-dashboard(657354ac-ce6a-4ee6-b133-99fa4afa1442)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9pfwm" podUID="657354ac-ce6a-4ee6-b133-99fa4afa1442"
	Dec 17 00:43:36 embed-certs-153232 kubelet[733]: I1217 00:43:36.758184     733 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-472j2" podStartSLOduration=4.763643562 podStartE2EDuration="11.758162649s" podCreationTimestamp="2025-12-17 00:43:25 +0000 UTC" firstStartedPulling="2025-12-17 00:43:25.908865846 +0000 UTC m=+6.430578638" lastFinishedPulling="2025-12-17 00:43:32.903384932 +0000 UTC m=+13.425097725" observedRunningTime="2025-12-17 00:43:33.643777783 +0000 UTC m=+14.165490594" watchObservedRunningTime="2025-12-17 00:43:36.758162649 +0000 UTC m=+17.279875457"
	Dec 17 00:43:47 embed-certs-153232 kubelet[733]: I1217 00:43:47.565239     733 scope.go:117] "RemoveContainer" containerID="a31ea9167311a0aaaa4fc8157542f43c53c0b4488e6d8118dc1dd2dee64b8e0c"
	Dec 17 00:43:47 embed-certs-153232 kubelet[733]: I1217 00:43:47.677139     733 scope.go:117] "RemoveContainer" containerID="a31ea9167311a0aaaa4fc8157542f43c53c0b4488e6d8118dc1dd2dee64b8e0c"
	Dec 17 00:43:47 embed-certs-153232 kubelet[733]: I1217 00:43:47.677422     733 scope.go:117] "RemoveContainer" containerID="3d2c3aa6013510ed343b70dda91e1024e94192c440d8cb7aa743b80510c1917f"
	Dec 17 00:43:47 embed-certs-153232 kubelet[733]: E1217 00:43:47.677622     733 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-9pfwm_kubernetes-dashboard(657354ac-ce6a-4ee6-b133-99fa4afa1442)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9pfwm" podUID="657354ac-ce6a-4ee6-b133-99fa4afa1442"
	Dec 17 00:43:53 embed-certs-153232 kubelet[733]: I1217 00:43:53.697207     733 scope.go:117] "RemoveContainer" containerID="9e12fba8024abfa61f00f5fe053cd5d50fccf8f0b0cd949bcff836ef6212ea59"
	Dec 17 00:43:55 embed-certs-153232 kubelet[733]: I1217 00:43:55.091558     733 scope.go:117] "RemoveContainer" containerID="3d2c3aa6013510ed343b70dda91e1024e94192c440d8cb7aa743b80510c1917f"
	Dec 17 00:43:55 embed-certs-153232 kubelet[733]: E1217 00:43:55.091784     733 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-9pfwm_kubernetes-dashboard(657354ac-ce6a-4ee6-b133-99fa4afa1442)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9pfwm" podUID="657354ac-ce6a-4ee6-b133-99fa4afa1442"
	Dec 17 00:44:07 embed-certs-153232 kubelet[733]: I1217 00:44:07.564738     733 scope.go:117] "RemoveContainer" containerID="3d2c3aa6013510ed343b70dda91e1024e94192c440d8cb7aa743b80510c1917f"
	Dec 17 00:44:07 embed-certs-153232 kubelet[733]: E1217 00:44:07.564971     733 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-9pfwm_kubernetes-dashboard(657354ac-ce6a-4ee6-b133-99fa4afa1442)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9pfwm" podUID="657354ac-ce6a-4ee6-b133-99fa4afa1442"
	Dec 17 00:44:12 embed-certs-153232 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 17 00:44:12 embed-certs-153232 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 17 00:44:12 embed-certs-153232 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:44:12 embed-certs-153232 systemd[1]: kubelet.service: Consumed 1.632s CPU time.
	
	
	==> kubernetes-dashboard [d4b900c582c6abc6c4d8c623e5365ca20e2f76c0980168c5652e9f834c43de48] <==
	2025/12/17 00:43:33 Using namespace: kubernetes-dashboard
	2025/12/17 00:43:33 Using in-cluster config to connect to apiserver
	2025/12/17 00:43:33 Using secret token for csrf signing
	2025/12/17 00:43:33 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/17 00:43:33 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/17 00:43:33 Successful initial request to the apiserver, version: v1.34.2
	2025/12/17 00:43:33 Generating JWE encryption key
	2025/12/17 00:43:33 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/17 00:43:33 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/17 00:43:33 Initializing JWE encryption key from synchronized object
	2025/12/17 00:43:33 Creating in-cluster Sidecar client
	2025/12/17 00:43:33 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/17 00:43:33 Serving insecurely on HTTP port: 9090
	2025/12/17 00:44:03 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/17 00:43:33 Starting overwatch
	
	
	==> storage-provisioner [4aa28ef7b86e0ac2c8860e0731143889f5585d08d1c8e3092e5fdbae502d7645] <==
	I1217 00:43:53.772586       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1217 00:43:53.781285       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1217 00:43:53.781335       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1217 00:43:53.783356       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:43:57.240074       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:44:01.500216       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:44:05.098541       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:44:08.152030       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:44:11.174316       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:44:11.179161       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1217 00:44:11.179295       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1217 00:44:11.179358       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"edc5b1f6-fb4f-4962-9502-23926c96ec27", APIVersion:"v1", ResourceVersion:"635", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-153232_1e248d9a-57f1-4723-a4a0-951eb4ec5313 became leader
	I1217 00:44:11.179412       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-153232_1e248d9a-57f1-4723-a4a0-951eb4ec5313!
	W1217 00:44:11.181885       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:44:11.185211       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1217 00:44:11.279589       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-153232_1e248d9a-57f1-4723-a4a0-951eb4ec5313!
	W1217 00:44:13.187766       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:44:13.191558       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [9e12fba8024abfa61f00f5fe053cd5d50fccf8f0b0cd949bcff836ef6212ea59] <==
	I1217 00:43:22.947627       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1217 00:43:52.950438       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-153232 -n embed-certs-153232
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-153232 -n embed-certs-153232: exit status 2 (320.833122ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context embed-certs-153232 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect embed-certs-153232
helpers_test.go:244: (dbg) docker inspect embed-certs-153232:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "0643874d17495c0c32f8432b12d57b11dd7085dfaf7906608f3a8753637c5a15",
	        "Created": "2025-12-17T00:42:07.386583477Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 301693,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T00:43:13.27345219Z",
	            "FinishedAt": "2025-12-17T00:43:12.344975251Z"
	        },
	        "Image": "sha256:2e44aac5cae5bb6b68b129ed5c85e80a5c1aac07706537d46ba12326f0e5c3cf",
	        "ResolvConfPath": "/var/lib/docker/containers/0643874d17495c0c32f8432b12d57b11dd7085dfaf7906608f3a8753637c5a15/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0643874d17495c0c32f8432b12d57b11dd7085dfaf7906608f3a8753637c5a15/hostname",
	        "HostsPath": "/var/lib/docker/containers/0643874d17495c0c32f8432b12d57b11dd7085dfaf7906608f3a8753637c5a15/hosts",
	        "LogPath": "/var/lib/docker/containers/0643874d17495c0c32f8432b12d57b11dd7085dfaf7906608f3a8753637c5a15/0643874d17495c0c32f8432b12d57b11dd7085dfaf7906608f3a8753637c5a15-json.log",
	        "Name": "/embed-certs-153232",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-153232:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-153232",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0643874d17495c0c32f8432b12d57b11dd7085dfaf7906608f3a8753637c5a15",
	                "LowerDir": "/var/lib/docker/overlay2/75e64cb888fdc80983d39325faeb17b16c0afd2693d7425dc490c93491959bb6-init/diff:/var/lib/docker/overlay2/594b812fd6d8db89dab322ea9e00d43dd555e9709fb5e6953e3873cce717392c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/75e64cb888fdc80983d39325faeb17b16c0afd2693d7425dc490c93491959bb6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/75e64cb888fdc80983d39325faeb17b16c0afd2693d7425dc490c93491959bb6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/75e64cb888fdc80983d39325faeb17b16c0afd2693d7425dc490c93491959bb6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-153232",
	                "Source": "/var/lib/docker/volumes/embed-certs-153232/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-153232",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-153232",
	                "name.minikube.sigs.k8s.io": "embed-certs-153232",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "f2158cc6cbcda501008c4446adb8778717225fc54b69ef09a4e4e5b039a35f2e",
	            "SandboxKey": "/var/run/docker/netns/f2158cc6cbcd",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33098"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33099"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33102"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33100"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33101"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-153232": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a0b8f164bc66d742f404a41cb692119204b3085d963265276bc535b43e9a9723",
	                    "EndpointID": "53af4ffc444007d7d41c8bf0e4ff9ef93828286575e91871e7c05279a13be2ba",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "22:ff:af:62:fd:29",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-153232",
	                        "0643874d1749"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-153232 -n embed-certs-153232
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-153232 -n embed-certs-153232: exit status 2 (314.226658ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-153232 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-153232 logs -n 25: (1.082769014s)
helpers_test.go:261: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p newest-cni-653717 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-653717            │ jenkins │ v1.37.0 │ 17 Dec 25 00:42 UTC │ 17 Dec 25 00:43 UTC │
	│ addons  │ enable metrics-server -p embed-certs-153232 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ embed-certs-153232           │ jenkins │ v1.37.0 │ 17 Dec 25 00:42 UTC │                     │
	│ stop    │ -p embed-certs-153232 --alsologtostderr -v=3                                                                                                                                                                                                         │ embed-certs-153232           │ jenkins │ v1.37.0 │ 17 Dec 25 00:42 UTC │ 17 Dec 25 00:43 UTC │
	│ addons  │ enable metrics-server -p newest-cni-653717 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ newest-cni-653717            │ jenkins │ v1.37.0 │ 17 Dec 25 00:43 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-414413 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                   │ default-k8s-diff-port-414413 │ jenkins │ v1.37.0 │ 17 Dec 25 00:43 UTC │                     │
	│ stop    │ -p newest-cni-653717 --alsologtostderr -v=3                                                                                                                                                                                                          │ newest-cni-653717            │ jenkins │ v1.37.0 │ 17 Dec 25 00:43 UTC │ 17 Dec 25 00:43 UTC │
	│ stop    │ -p default-k8s-diff-port-414413 --alsologtostderr -v=3                                                                                                                                                                                               │ default-k8s-diff-port-414413 │ jenkins │ v1.37.0 │ 17 Dec 25 00:43 UTC │ 17 Dec 25 00:43 UTC │
	│ addons  │ enable dashboard -p newest-cni-653717 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ newest-cni-653717            │ jenkins │ v1.37.0 │ 17 Dec 25 00:43 UTC │ 17 Dec 25 00:43 UTC │
	│ start   │ -p newest-cni-653717 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-653717            │ jenkins │ v1.37.0 │ 17 Dec 25 00:43 UTC │ 17 Dec 25 00:43 UTC │
	│ addons  │ enable dashboard -p embed-certs-153232 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ embed-certs-153232           │ jenkins │ v1.37.0 │ 17 Dec 25 00:43 UTC │ 17 Dec 25 00:43 UTC │
	│ start   │ -p embed-certs-153232 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-153232           │ jenkins │ v1.37.0 │ 17 Dec 25 00:43 UTC │ 17 Dec 25 00:44 UTC │
	│ image   │ newest-cni-653717 image list --format=json                                                                                                                                                                                                           │ newest-cni-653717            │ jenkins │ v1.37.0 │ 17 Dec 25 00:43 UTC │ 17 Dec 25 00:43 UTC │
	│ pause   │ -p newest-cni-653717 --alsologtostderr -v=1                                                                                                                                                                                                          │ newest-cni-653717            │ jenkins │ v1.37.0 │ 17 Dec 25 00:43 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-414413 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                              │ default-k8s-diff-port-414413 │ jenkins │ v1.37.0 │ 17 Dec 25 00:43 UTC │ 17 Dec 25 00:43 UTC │
	│ start   │ -p default-k8s-diff-port-414413 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-414413 │ jenkins │ v1.37.0 │ 17 Dec 25 00:43 UTC │                     │
	│ delete  │ -p newest-cni-653717                                                                                                                                                                                                                                 │ newest-cni-653717            │ jenkins │ v1.37.0 │ 17 Dec 25 00:43 UTC │ 17 Dec 25 00:43 UTC │
	│ delete  │ -p newest-cni-653717                                                                                                                                                                                                                                 │ newest-cni-653717            │ jenkins │ v1.37.0 │ 17 Dec 25 00:43 UTC │ 17 Dec 25 00:43 UTC │
	│ start   │ -p auto-802249 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                              │ auto-802249                  │ jenkins │ v1.37.0 │ 17 Dec 25 00:43 UTC │                     │
	│ image   │ no-preload-864613 image list --format=json                                                                                                                                                                                                           │ no-preload-864613            │ jenkins │ v1.37.0 │ 17 Dec 25 00:43 UTC │ 17 Dec 25 00:43 UTC │
	│ pause   │ -p no-preload-864613 --alsologtostderr -v=1                                                                                                                                                                                                          │ no-preload-864613            │ jenkins │ v1.37.0 │ 17 Dec 25 00:43 UTC │                     │
	│ delete  │ -p no-preload-864613                                                                                                                                                                                                                                 │ no-preload-864613            │ jenkins │ v1.37.0 │ 17 Dec 25 00:43 UTC │ 17 Dec 25 00:43 UTC │
	│ delete  │ -p no-preload-864613                                                                                                                                                                                                                                 │ no-preload-864613            │ jenkins │ v1.37.0 │ 17 Dec 25 00:43 UTC │ 17 Dec 25 00:43 UTC │
	│ start   │ -p kindnet-802249 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio                                                                                                             │ kindnet-802249               │ jenkins │ v1.37.0 │ 17 Dec 25 00:43 UTC │                     │
	│ image   │ embed-certs-153232 image list --format=json                                                                                                                                                                                                          │ embed-certs-153232           │ jenkins │ v1.37.0 │ 17 Dec 25 00:44 UTC │ 17 Dec 25 00:44 UTC │
	│ pause   │ -p embed-certs-153232 --alsologtostderr -v=1                                                                                                                                                                                                         │ embed-certs-153232           │ jenkins │ v1.37.0 │ 17 Dec 25 00:44 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 00:43:48
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 00:43:48.769795  313838 out.go:360] Setting OutFile to fd 1 ...
	I1217 00:43:48.770047  313838 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:43:48.770055  313838 out.go:374] Setting ErrFile to fd 2...
	I1217 00:43:48.770060  313838 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:43:48.770244  313838 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-12816/.minikube/bin
	I1217 00:43:48.770680  313838 out.go:368] Setting JSON to false
	I1217 00:43:48.771902  313838 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":5179,"bootTime":1765927050,"procs":344,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 00:43:48.771962  313838 start.go:143] virtualization: kvm guest
	I1217 00:43:48.774031  313838 out.go:179] * [kindnet-802249] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 00:43:48.775928  313838 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 00:43:48.775950  313838 notify.go:221] Checking for updates...
	I1217 00:43:48.778239  313838 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 00:43:48.779636  313838 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22168-12816/kubeconfig
	I1217 00:43:48.780730  313838 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-12816/.minikube
	I1217 00:43:48.781754  313838 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 00:43:48.783004  313838 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 00:43:48.784614  313838 config.go:182] Loaded profile config "auto-802249": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:43:48.784740  313838 config.go:182] Loaded profile config "default-k8s-diff-port-414413": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:43:48.784875  313838 config.go:182] Loaded profile config "embed-certs-153232": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:43:48.785025  313838 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 00:43:48.811205  313838 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1217 00:43:48.811415  313838 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:43:48.879696  313838 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-17 00:43:48.868563585 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 00:43:48.879821  313838 docker.go:319] overlay module found
	I1217 00:43:48.881467  313838 out.go:179] * Using the docker driver based on user configuration
	I1217 00:43:48.882479  313838 start.go:309] selected driver: docker
	I1217 00:43:48.882497  313838 start.go:927] validating driver "docker" against <nil>
	I1217 00:43:48.882510  313838 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 00:43:48.883253  313838 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:43:48.942567  313838 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-17 00:43:48.932031348 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 00:43:48.942777  313838 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1217 00:43:48.943106  313838 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 00:43:48.944723  313838 out.go:179] * Using Docker driver with root privileges
	I1217 00:43:48.945842  313838 cni.go:84] Creating CNI manager for "kindnet"
	I1217 00:43:48.945864  313838 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1217 00:43:48.945941  313838 start.go:353] cluster config:
	{Name:kindnet-802249 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kindnet-802249 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRunti
me:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentP
ID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:43:48.947336  313838 out.go:179] * Starting "kindnet-802249" primary control-plane node in "kindnet-802249" cluster
	I1217 00:43:48.948365  313838 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 00:43:48.949518  313838 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 00:43:48.950629  313838 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1217 00:43:48.950677  313838 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22168-12816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1217 00:43:48.950691  313838 cache.go:65] Caching tarball of preloaded images
	I1217 00:43:48.950727  313838 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 00:43:48.950805  313838 preload.go:238] Found /home/jenkins/minikube-integration/22168-12816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1217 00:43:48.950818  313838 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1217 00:43:48.950936  313838 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/kindnet-802249/config.json ...
	I1217 00:43:48.950964  313838 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/kindnet-802249/config.json: {Name:mk0bb9291022a4703579df82ea9711e36a66c4f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:43:48.972665  313838 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 00:43:48.972692  313838 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 00:43:48.972710  313838 cache.go:243] Successfully downloaded all kic artifacts
	I1217 00:43:48.972742  313838 start.go:360] acquireMachinesLock for kindnet-802249: {Name:mkbd43bf8515ac51e94479f10c515da678a2d966 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 00:43:48.972870  313838 start.go:364] duration metric: took 107.153µs to acquireMachinesLock for "kindnet-802249"
	I1217 00:43:48.972901  313838 start.go:93] Provisioning new machine with config: &{Name:kindnet-802249 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kindnet-802249 Namespace:default APIServerHAVIP: APIServerName:m
inikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirm
warePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 00:43:48.972974  313838 start.go:125] createHost starting for "" (driver="docker")
	W1217 00:43:44.760180  306295 pod_ready.go:104] pod "coredns-66bc5c9577-v76f4" is not "Ready", error: <nil>
	W1217 00:43:47.273533  306295 pod_ready.go:104] pod "coredns-66bc5c9577-v76f4" is not "Ready", error: <nil>
	I1217 00:43:49.784559  307526 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1217 00:43:49.784642  307526 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 00:43:49.784765  307526 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 00:43:49.784840  307526 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1217 00:43:49.784881  307526 kubeadm.go:319] OS: Linux
	I1217 00:43:49.784955  307526 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 00:43:49.785040  307526 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 00:43:49.785108  307526 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 00:43:49.785187  307526 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 00:43:49.785261  307526 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 00:43:49.785314  307526 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 00:43:49.785358  307526 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 00:43:49.785396  307526 kubeadm.go:319] CGROUPS_IO: enabled
	I1217 00:43:49.785478  307526 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 00:43:49.785573  307526 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 00:43:49.785652  307526 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 00:43:49.785710  307526 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 00:43:49.787201  307526 out.go:252]   - Generating certificates and keys ...
	I1217 00:43:49.787276  307526 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 00:43:49.787364  307526 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 00:43:49.787429  307526 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1217 00:43:49.787475  307526 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1217 00:43:49.787528  307526 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1217 00:43:49.787597  307526 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1217 00:43:49.787664  307526 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1217 00:43:49.787784  307526 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [auto-802249 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1217 00:43:49.787884  307526 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1217 00:43:49.788045  307526 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [auto-802249 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1217 00:43:49.788144  307526 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1217 00:43:49.788236  307526 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1217 00:43:49.788301  307526 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1217 00:43:49.788362  307526 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 00:43:49.788412  307526 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 00:43:49.788457  307526 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 00:43:49.788500  307526 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 00:43:49.788552  307526 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 00:43:49.788596  307526 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 00:43:49.788678  307526 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 00:43:49.788763  307526 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 00:43:49.790074  307526 out.go:252]   - Booting up control plane ...
	I1217 00:43:49.790183  307526 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 00:43:49.790288  307526 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 00:43:49.790399  307526 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 00:43:49.790508  307526 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 00:43:49.790639  307526 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 00:43:49.790791  307526 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 00:43:49.790914  307526 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 00:43:49.790971  307526 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 00:43:49.791150  307526 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 00:43:49.791278  307526 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 00:43:49.791370  307526 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 502.199672ms
	I1217 00:43:49.791497  307526 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1217 00:43:49.791613  307526 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I1217 00:43:49.791733  307526 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1217 00:43:49.791846  307526 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1217 00:43:49.791961  307526 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.963819553s
	I1217 00:43:49.792139  307526 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.065065905s
	I1217 00:43:49.792238  307526 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.001333371s
	I1217 00:43:49.792362  307526 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1217 00:43:49.792527  307526 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1217 00:43:49.792584  307526 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1217 00:43:49.792827  307526 kubeadm.go:319] [mark-control-plane] Marking the node auto-802249 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1217 00:43:49.792880  307526 kubeadm.go:319] [bootstrap-token] Using token: hewufi.jsvybo2i82bav9vd
	I1217 00:43:49.794192  307526 out.go:252]   - Configuring RBAC rules ...
	I1217 00:43:49.794372  307526 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1217 00:43:49.794485  307526 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1217 00:43:49.794694  307526 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1217 00:43:49.794899  307526 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1217 00:43:49.795139  307526 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1217 00:43:49.795280  307526 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1217 00:43:49.795467  307526 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1217 00:43:49.795531  307526 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1217 00:43:49.795601  307526 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1217 00:43:49.795610  307526 kubeadm.go:319] 
	I1217 00:43:49.795698  307526 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1217 00:43:49.795708  307526 kubeadm.go:319] 
	I1217 00:43:49.795824  307526 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1217 00:43:49.795834  307526 kubeadm.go:319] 
	I1217 00:43:49.795869  307526 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1217 00:43:49.795971  307526 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1217 00:43:49.796067  307526 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1217 00:43:49.796080  307526 kubeadm.go:319] 
	I1217 00:43:49.796159  307526 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1217 00:43:49.796168  307526 kubeadm.go:319] 
	I1217 00:43:49.796236  307526 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1217 00:43:49.796246  307526 kubeadm.go:319] 
	I1217 00:43:49.796323  307526 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1217 00:43:49.796430  307526 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1217 00:43:49.796545  307526 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1217 00:43:49.796556  307526 kubeadm.go:319] 
	I1217 00:43:49.796684  307526 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1217 00:43:49.796794  307526 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1217 00:43:49.796806  307526 kubeadm.go:319] 
	I1217 00:43:49.796928  307526 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token hewufi.jsvybo2i82bav9vd \
	I1217 00:43:49.797083  307526 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:a7c34974519aee4953e03245da076d7a2eba06e40135880a85806e2dab303fa1 \
	I1217 00:43:49.797128  307526 kubeadm.go:319] 	--control-plane 
	I1217 00:43:49.797137  307526 kubeadm.go:319] 
	I1217 00:43:49.797280  307526 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1217 00:43:49.797299  307526 kubeadm.go:319] 
	I1217 00:43:49.797372  307526 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token hewufi.jsvybo2i82bav9vd \
	I1217 00:43:49.797481  307526 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:a7c34974519aee4953e03245da076d7a2eba06e40135880a85806e2dab303fa1 
	I1217 00:43:49.797510  307526 cni.go:84] Creating CNI manager for ""
	I1217 00:43:49.797519  307526 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 00:43:49.799489  307526 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1217 00:43:49.800531  307526 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1217 00:43:49.805278  307526 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1217 00:43:49.805294  307526 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1217 00:43:49.820140  307526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1217 00:43:50.071276  307526 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1217 00:43:50.071364  307526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:43:50.071389  307526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-802249 minikube.k8s.io/updated_at=2025_12_17T00_43_50_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=c7bb9b74fe8fa422b352c813eb039f077f405cb1 minikube.k8s.io/name=auto-802249 minikube.k8s.io/primary=true
	I1217 00:43:50.083649  307526 ops.go:34] apiserver oom_adj: -16
	I1217 00:43:50.166785  307526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:43:50.667140  307526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:43:51.167625  307526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:43:51.667172  307526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:43:52.167662  307526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:43:52.667540  307526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1217 00:43:48.367908  301437 pod_ready.go:104] pod "coredns-66bc5c9577-vtspd" is not "Ready", error: <nil>
	W1217 00:43:50.368775  301437 pod_ready.go:104] pod "coredns-66bc5c9577-vtspd" is not "Ready", error: <nil>
	W1217 00:43:52.877476  301437 pod_ready.go:104] pod "coredns-66bc5c9577-vtspd" is not "Ready", error: <nil>
	I1217 00:43:48.974767  313838 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1217 00:43:48.975076  313838 start.go:159] libmachine.API.Create for "kindnet-802249" (driver="docker")
	I1217 00:43:48.975111  313838 client.go:173] LocalClient.Create starting
	I1217 00:43:48.975180  313838 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem
	I1217 00:43:48.975219  313838 main.go:143] libmachine: Decoding PEM data...
	I1217 00:43:48.975242  313838 main.go:143] libmachine: Parsing certificate...
	I1217 00:43:48.975326  313838 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22168-12816/.minikube/certs/cert.pem
	I1217 00:43:48.975352  313838 main.go:143] libmachine: Decoding PEM data...
	I1217 00:43:48.975383  313838 main.go:143] libmachine: Parsing certificate...
	I1217 00:43:48.975744  313838 cli_runner.go:164] Run: docker network inspect kindnet-802249 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1217 00:43:48.994403  313838 cli_runner.go:211] docker network inspect kindnet-802249 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1217 00:43:48.994483  313838 network_create.go:284] running [docker network inspect kindnet-802249] to gather additional debugging logs...
	I1217 00:43:48.994509  313838 cli_runner.go:164] Run: docker network inspect kindnet-802249
	W1217 00:43:49.017898  313838 cli_runner.go:211] docker network inspect kindnet-802249 returned with exit code 1
	I1217 00:43:49.017935  313838 network_create.go:287] error running [docker network inspect kindnet-802249]: docker network inspect kindnet-802249: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kindnet-802249 not found
	I1217 00:43:49.017964  313838 network_create.go:289] output of [docker network inspect kindnet-802249]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kindnet-802249 not found
	
	** /stderr **
	I1217 00:43:49.018078  313838 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 00:43:49.043478  313838 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-6ffd1d738f01 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:6e:3d:52:75:47:82} reservation:<nil>}
	I1217 00:43:49.044499  313838 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-280edd437675 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:5e:ae:02:b5:f9:a6} reservation:<nil>}
	I1217 00:43:49.045635  313838 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-9f28d049043c IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:fa:3f:8e:e9:44:56} reservation:<nil>}
	I1217 00:43:49.046382  313838 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-a57026acfc12 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:aa:e6:32:39:49:3b} reservation:<nil>}
	I1217 00:43:49.047004  313838 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-a0b8f164bc66 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:ae:bf:0f:c2:a1:7a} reservation:<nil>}
	I1217 00:43:49.047816  313838 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-2bf3b4bee687 IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:ce:d1:20:00:3e:43} reservation:<nil>}
	I1217 00:43:49.048945  313838 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f9aa00}
	I1217 00:43:49.048968  313838 network_create.go:124] attempt to create docker network kindnet-802249 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1217 00:43:49.049135  313838 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kindnet-802249 kindnet-802249
	I1217 00:43:49.098433  313838 network_create.go:108] docker network kindnet-802249 192.168.103.0/24 created
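Six subnets were already claimed by other profiles on this host, so 192.168.103.0/24 was chosen and the bridge network created with the options shown at 00:43:49.049135. A trimmed-down sketch of the same operation plus a check of the IPAM config it produced (the -o --ip-masq -o --icc and label flags from the real command are dropped here for brevity):

    # Sketch: create an equivalent bridge network and confirm its subnet/gateway
    docker network create --driver=bridge \
      --subnet=192.168.103.0/24 --gateway=192.168.103.1 \
      -o com.docker.network.driver.mtu=1500 kindnet-802249
    docker network inspect -f '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}' kindnet-802249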
	I1217 00:43:49.098466  313838 kic.go:121] calculated static IP "192.168.103.2" for the "kindnet-802249" container
	I1217 00:43:49.098566  313838 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1217 00:43:49.116785  313838 cli_runner.go:164] Run: docker volume create kindnet-802249 --label name.minikube.sigs.k8s.io=kindnet-802249 --label created_by.minikube.sigs.k8s.io=true
	I1217 00:43:49.135373  313838 oci.go:103] Successfully created a docker volume kindnet-802249
	I1217 00:43:49.135457  313838 cli_runner.go:164] Run: docker run --rm --name kindnet-802249-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-802249 --entrypoint /usr/bin/test -v kindnet-802249:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib
	I1217 00:43:49.745394  313838 oci.go:107] Successfully prepared a docker volume kindnet-802249
	I1217 00:43:49.745467  313838 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1217 00:43:49.745481  313838 kic.go:194] Starting extracting preloaded images to volume ...
	I1217 00:43:49.745541  313838 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22168-12816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-802249:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir
	I1217 00:43:53.730169  313838 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22168-12816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-802249:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir: (3.98455596s)
	I1217 00:43:53.730206  313838 kic.go:203] duration metric: took 3.984721772s to extract preloaded images to volume ...
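The ~4s step above is the preload path: the lz4 tarball of v1.34.2/cri-o images is untarred straight into the kindnet-802249 volume by a throwaway kicbase container, so the node starts with its images already in place. The command's essential shape (host path shortened and image digest omitted for readability):

    # Sketch: extract a preload tarball into a named volume via a disposable container
    docker run --rm --entrypoint /usr/bin/tar \
      -v /path/to/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro \
      -v kindnet-802249:/extractDir \
      gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141 \
      -I lz4 -xf /preloaded.tar -C /extractDir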
	W1217 00:43:53.730319  313838 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1217 00:43:53.730365  313838 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1217 00:43:53.730427  313838 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	W1217 00:43:49.749486  306295 pod_ready.go:104] pod "coredns-66bc5c9577-v76f4" is not "Ready", error: <nil>
	W1217 00:43:51.749796  306295 pod_ready.go:104] pod "coredns-66bc5c9577-v76f4" is not "Ready", error: <nil>
	W1217 00:43:53.752706  306295 pod_ready.go:104] pod "coredns-66bc5c9577-v76f4" is not "Ready", error: <nil>
	I1217 00:43:53.167719  307526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:43:53.667433  307526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:43:54.167168  307526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:43:54.667183  307526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:43:54.749050  307526 kubeadm.go:1114] duration metric: took 4.677757562s to wait for elevateKubeSystemPrivileges
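The half-second "kubectl get sa default" loop above is the elevateKubeSystemPrivileges wait: minikube polls until the "default" ServiceAccount exists (a sign the service-account controller has caught up) after creating the minikube-rbac clusterrolebinding at 00:43:50.071389, and here that takes about 4.7s. With a recent kubectl (1.31+ has --for=create) the same wait could be a single command; a sketch, not what minikube actually runs:

    # Sketch: block until the default ServiceAccount exists instead of polling
    sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      wait --for=create serviceaccount/default --timeout=60s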
	I1217 00:43:54.749081  307526 kubeadm.go:403] duration metric: took 16.763588732s to StartCluster
	I1217 00:43:54.749106  307526 settings.go:142] acquiring lock: {Name:mk7d7632cd00ceda791845d793d841181ea8188a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:43:54.749184  307526 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22168-12816/kubeconfig
	I1217 00:43:54.751400  307526 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/kubeconfig: {Name:mkd977475b39383babd1c1bcd25b2b3c1ea2d727 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:43:54.751685  307526 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1217 00:43:54.751696  307526 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 00:43:54.751759  307526 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 00:43:54.751867  307526 addons.go:70] Setting storage-provisioner=true in profile "auto-802249"
	I1217 00:43:54.751878  307526 config.go:182] Loaded profile config "auto-802249": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:43:54.751892  307526 addons.go:239] Setting addon storage-provisioner=true in "auto-802249"
	I1217 00:43:54.751924  307526 host.go:66] Checking if "auto-802249" exists ...
	I1217 00:43:54.751907  307526 addons.go:70] Setting default-storageclass=true in profile "auto-802249"
	I1217 00:43:54.751979  307526 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "auto-802249"
	I1217 00:43:54.752435  307526 cli_runner.go:164] Run: docker container inspect auto-802249 --format={{.State.Status}}
	I1217 00:43:54.752612  307526 cli_runner.go:164] Run: docker container inspect auto-802249 --format={{.State.Status}}
	I1217 00:43:54.753065  307526 out.go:179] * Verifying Kubernetes components...
	I1217 00:43:54.754299  307526 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:43:54.779505  307526 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 00:43:54.780431  307526 addons.go:239] Setting addon default-storageclass=true in "auto-802249"
	I1217 00:43:54.780480  307526 host.go:66] Checking if "auto-802249" exists ...
	I1217 00:43:54.780947  307526 cli_runner.go:164] Run: docker container inspect auto-802249 --format={{.State.Status}}
	I1217 00:43:54.782107  307526 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:43:54.782129  307526 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 00:43:54.782180  307526 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-802249
	I1217 00:43:54.811663  307526 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/auto-802249/id_rsa Username:docker}
	I1217 00:43:54.812673  307526 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 00:43:54.812696  307526 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 00:43:54.812753  307526 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-802249
	I1217 00:43:54.837149  307526 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/auto-802249/id_rsa Username:docker}
	I1217 00:43:54.844552  307526 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1217 00:43:54.903756  307526 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 00:43:54.922070  307526 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:43:54.948057  307526 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:43:55.028215  307526 start.go:977] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
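The injection reported here is the sed pipeline at 00:43:54.844552: it patches the coredns ConfigMap so the docker network's gateway (192.168.94.1) answers for host.minikube.internal, and adds a log directive before errors. The Corefile block it inserts, reconstructed from that sed expression:

            hosts {
               192.168.94.1 host.minikube.internal
               fallthrough
            }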
	I1217 00:43:55.029798  307526 node_ready.go:35] waiting up to 15m0s for node "auto-802249" to be "Ready" ...
	I1217 00:43:55.214813  307526 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
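Both addon manifests were scp'd under /etc/kubernetes/addons and applied with the bundled kubectl at 00:43:54.922070 and 00:43:54.948057. A minimal after-the-fact check; the context and object names below ("auto-802249", "standard", "storage-provisioner") follow the usual minikube defaults and are assumed rather than read from this log:

    # Sketch: confirm the two enabled addons produced their objects
    kubectl --context auto-802249 get storageclass standard
    kubectl --context auto-802249 -n kube-system get pod storage-provisioner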
	I1217 00:43:53.795315  313838 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kindnet-802249 --name kindnet-802249 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-802249 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kindnet-802249 --network kindnet-802249 --ip 192.168.103.2 --volume kindnet-802249:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78
	I1217 00:43:54.089737  313838 cli_runner.go:164] Run: docker container inspect kindnet-802249 --format={{.State.Running}}
	I1217 00:43:54.109743  313838 cli_runner.go:164] Run: docker container inspect kindnet-802249 --format={{.State.Status}}
	I1217 00:43:54.128234  313838 cli_runner.go:164] Run: docker exec kindnet-802249 stat /var/lib/dpkg/alternatives/iptables
	I1217 00:43:54.175722  313838 oci.go:144] the created container "kindnet-802249" has a running status.
	I1217 00:43:54.175761  313838 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22168-12816/.minikube/machines/kindnet-802249/id_rsa...
	I1217 00:43:54.389865  313838 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22168-12816/.minikube/machines/kindnet-802249/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1217 00:43:54.425293  313838 cli_runner.go:164] Run: docker container inspect kindnet-802249 --format={{.State.Status}}
	I1217 00:43:54.447980  313838 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1217 00:43:54.448028  313838 kic_runner.go:114] Args: [docker exec --privileged kindnet-802249 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1217 00:43:54.494770  313838 cli_runner.go:164] Run: docker container inspect kindnet-802249 --format={{.State.Status}}
	I1217 00:43:54.514773  313838 machine.go:94] provisionDockerMachine start ...
	I1217 00:43:54.514890  313838 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-802249
	I1217 00:43:54.534052  313838 main.go:143] libmachine: Using SSH client type: native
	I1217 00:43:54.534346  313838 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1217 00:43:54.534363  313838 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 00:43:54.669837  313838 main.go:143] libmachine: SSH cmd err, output: <nil>: kindnet-802249
	
	I1217 00:43:54.669877  313838 ubuntu.go:182] provisioning hostname "kindnet-802249"
	I1217 00:43:54.669940  313838 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-802249
	I1217 00:43:54.694685  313838 main.go:143] libmachine: Using SSH client type: native
	I1217 00:43:54.695022  313838 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1217 00:43:54.695046  313838 main.go:143] libmachine: About to run SSH command:
	sudo hostname kindnet-802249 && echo "kindnet-802249" | sudo tee /etc/hostname
	I1217 00:43:54.850108  313838 main.go:143] libmachine: SSH cmd err, output: <nil>: kindnet-802249
	
	I1217 00:43:54.850196  313838 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-802249
	I1217 00:43:54.875628  313838 main.go:143] libmachine: Using SSH client type: native
	I1217 00:43:54.875936  313838 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1217 00:43:54.876020  313838 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-802249' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-802249/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-802249' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 00:43:55.012188  313838 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 00:43:55.012274  313838 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22168-12816/.minikube CaCertPath:/home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22168-12816/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22168-12816/.minikube}
	I1217 00:43:55.012313  313838 ubuntu.go:190] setting up certificates
	I1217 00:43:55.012326  313838 provision.go:84] configureAuth start
	I1217 00:43:55.012399  313838 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-802249
	I1217 00:43:55.034482  313838 provision.go:143] copyHostCerts
	I1217 00:43:55.034554  313838 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-12816/.minikube/ca.pem, removing ...
	I1217 00:43:55.034567  313838 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-12816/.minikube/ca.pem
	I1217 00:43:55.034650  313838 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22168-12816/.minikube/ca.pem (1078 bytes)
	I1217 00:43:55.034772  313838 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-12816/.minikube/cert.pem, removing ...
	I1217 00:43:55.034786  313838 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-12816/.minikube/cert.pem
	I1217 00:43:55.034834  313838 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22168-12816/.minikube/cert.pem (1123 bytes)
	I1217 00:43:55.035054  313838 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-12816/.minikube/key.pem, removing ...
	I1217 00:43:55.035070  313838 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-12816/.minikube/key.pem
	I1217 00:43:55.035120  313838 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22168-12816/.minikube/key.pem (1679 bytes)
	I1217 00:43:55.035206  313838 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22168-12816/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca-key.pem org=jenkins.kindnet-802249 san=[127.0.0.1 192.168.103.2 kindnet-802249 localhost minikube]
	I1217 00:43:55.117656  313838 provision.go:177] copyRemoteCerts
	I1217 00:43:55.117712  313838 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 00:43:55.117746  313838 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-802249
	I1217 00:43:55.137183  313838 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/kindnet-802249/id_rsa Username:docker}
	I1217 00:43:55.234028  313838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1217 00:43:55.253417  313838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I1217 00:43:55.270547  313838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 00:43:55.287665  313838 provision.go:87] duration metric: took 275.319952ms to configureAuth
	I1217 00:43:55.287692  313838 ubuntu.go:206] setting minikube options for container-runtime
	I1217 00:43:55.287875  313838 config.go:182] Loaded profile config "kindnet-802249": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:43:55.287987  313838 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-802249
	I1217 00:43:55.305860  313838 main.go:143] libmachine: Using SSH client type: native
	I1217 00:43:55.306170  313838 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1217 00:43:55.306189  313838 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 00:43:55.584605  313838 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 00:43:55.584630  313838 machine.go:97] duration metric: took 1.069834466s to provisionDockerMachine
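provisionDockerMachine's last step was the SSH command at 00:43:55.306189, which writes /etc/sysconfig/crio.minikube and restarts crio so the cluster's service CIDR is treated as an insecure registry range. Per the printf in that command, the file ends up containing exactly:

    CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '

A quick in-node check would be cat /etc/sysconfig/crio.minikube followed by systemctl is-active crio.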
	I1217 00:43:55.584643  313838 client.go:176] duration metric: took 6.609524374s to LocalClient.Create
	I1217 00:43:55.584660  313838 start.go:167] duration metric: took 6.609587903s to libmachine.API.Create "kindnet-802249"
	I1217 00:43:55.584668  313838 start.go:293] postStartSetup for "kindnet-802249" (driver="docker")
	I1217 00:43:55.584678  313838 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 00:43:55.584740  313838 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 00:43:55.584788  313838 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-802249
	I1217 00:43:55.604929  313838 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/kindnet-802249/id_rsa Username:docker}
	I1217 00:43:55.700651  313838 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 00:43:55.704173  313838 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 00:43:55.704203  313838 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 00:43:55.704215  313838 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-12816/.minikube/addons for local assets ...
	I1217 00:43:55.704280  313838 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-12816/.minikube/files for local assets ...
	I1217 00:43:55.704394  313838 filesync.go:149] local asset: /home/jenkins/minikube-integration/22168-12816/.minikube/files/etc/ssl/certs/163542.pem -> 163542.pem in /etc/ssl/certs
	I1217 00:43:55.704523  313838 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 00:43:55.712405  313838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/files/etc/ssl/certs/163542.pem --> /etc/ssl/certs/163542.pem (1708 bytes)
	I1217 00:43:55.732675  313838 start.go:296] duration metric: took 147.993653ms for postStartSetup
	I1217 00:43:55.733034  313838 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-802249
	I1217 00:43:55.753293  313838 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/kindnet-802249/config.json ...
	I1217 00:43:55.753544  313838 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 00:43:55.753589  313838 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-802249
	I1217 00:43:55.772113  313838 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/kindnet-802249/id_rsa Username:docker}
	I1217 00:43:55.865735  313838 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 00:43:55.870560  313838 start.go:128] duration metric: took 6.89757341s to createHost
	I1217 00:43:55.870586  313838 start.go:83] releasing machines lock for "kindnet-802249", held for 6.897698079s
	I1217 00:43:55.870652  313838 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-802249
	I1217 00:43:55.888931  313838 ssh_runner.go:195] Run: cat /version.json
	I1217 00:43:55.888974  313838 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-802249
	I1217 00:43:55.889048  313838 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 00:43:55.889137  313838 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-802249
	I1217 00:43:55.907766  313838 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/kindnet-802249/id_rsa Username:docker}
	I1217 00:43:55.908236  313838 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/kindnet-802249/id_rsa Username:docker}
	I1217 00:43:56.063118  313838 ssh_runner.go:195] Run: systemctl --version
	I1217 00:43:56.071048  313838 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 00:43:56.109942  313838 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 00:43:56.114629  313838 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 00:43:56.114685  313838 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 00:43:56.140879  313838 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1217 00:43:56.140901  313838 start.go:496] detecting cgroup driver to use...
	I1217 00:43:56.140931  313838 detect.go:190] detected "systemd" cgroup driver on host os
	I1217 00:43:56.140975  313838 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 00:43:56.156326  313838 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 00:43:56.168258  313838 docker.go:218] disabling cri-docker service (if available) ...
	I1217 00:43:56.168311  313838 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 00:43:56.184103  313838 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 00:43:56.200566  313838 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 00:43:56.287131  313838 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 00:43:56.381643  313838 docker.go:234] disabling docker service ...
	I1217 00:43:56.381698  313838 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 00:43:56.399619  313838 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 00:43:56.411910  313838 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 00:43:56.499371  313838 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 00:43:56.579754  313838 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 00:43:56.591686  313838 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 00:43:56.605714  313838 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 00:43:56.605767  313838 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:43:56.615899  313838 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1217 00:43:56.615959  313838 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:43:56.624522  313838 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:43:56.633018  313838 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:43:56.641645  313838 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 00:43:56.649429  313838 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:43:56.657893  313838 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:43:56.670703  313838 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
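Taken together, the sed edits from 00:43:56.605767 onward leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following settings; this is reconstructed from the sed expressions above, not read back from the node:

    pause_image = "registry.k8s.io/pause:3.10.1"
    cgroup_manager = "systemd"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]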
	I1217 00:43:56.678946  313838 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 00:43:56.685849  313838 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 00:43:56.692753  313838 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:43:56.773293  313838 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1217 00:43:56.913529  313838 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 00:43:56.913585  313838 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 00:43:56.917759  313838 start.go:564] Will wait 60s for crictl version
	I1217 00:43:56.917822  313838 ssh_runner.go:195] Run: which crictl
	I1217 00:43:56.921519  313838 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 00:43:56.946851  313838 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1217 00:43:56.946926  313838 ssh_runner.go:195] Run: crio --version
	I1217 00:43:56.975110  313838 ssh_runner.go:195] Run: crio --version
	I1217 00:43:57.003458  313838 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1217 00:43:55.215824  307526 addons.go:530] duration metric: took 464.062441ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1217 00:43:55.533201  307526 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-802249" context rescaled to 1 replicas
	W1217 00:43:57.033455  307526 node_ready.go:57] node "auto-802249" has "Ready":"False" status (will retry)
	W1217 00:43:55.368021  301437 pod_ready.go:104] pod "coredns-66bc5c9577-vtspd" is not "Ready", error: <nil>
	W1217 00:43:57.867499  301437 pod_ready.go:104] pod "coredns-66bc5c9577-vtspd" is not "Ready", error: <nil>
	I1217 00:43:57.004572  313838 cli_runner.go:164] Run: docker network inspect kindnet-802249 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 00:43:57.022197  313838 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1217 00:43:57.026203  313838 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 00:43:57.037089  313838 kubeadm.go:884] updating cluster {Name:kindnet-802249 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kindnet-802249 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServer
Names:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePa
th: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 00:43:57.037199  313838 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1217 00:43:57.037253  313838 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 00:43:57.066773  313838 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 00:43:57.066792  313838 crio.go:433] Images already preloaded, skipping extraction
	I1217 00:43:57.066829  313838 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 00:43:57.093098  313838 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 00:43:57.093118  313838 cache_images.go:86] Images are preloaded, skipping loading
	I1217 00:43:57.093125  313838 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.2 crio true true} ...
	I1217 00:43:57.093198  313838 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=kindnet-802249 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:kindnet-802249 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet}
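This unit fragment is the kubelet drop-in written a moment later to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (00:43:57.155310, 365 bytes); it pins the kubelet to the binary under /var/lib/minikube/binaries/v1.34.2 and fixes the node IP and hostname override. Inside the node it can be reviewed as the merged unit:

    # Sketch: show the kubelet service plus its drop-ins as systemd resolves them
    systemctl cat kubelet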
	I1217 00:43:57.093256  313838 ssh_runner.go:195] Run: crio config
	I1217 00:43:57.138285  313838 cni.go:84] Creating CNI manager for "kindnet"
	I1217 00:43:57.138307  313838 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 00:43:57.138327  313838 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-802249 NodeName:kindnet-802249 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kub
ernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 00:43:57.138484  313838 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kindnet-802249"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 00:43:57.138554  313838 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1217 00:43:57.147488  313838 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 00:43:57.147546  313838 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 00:43:57.155310  313838 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (365 bytes)
	I1217 00:43:57.168060  313838 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1217 00:43:57.182762  313838 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
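The YAML dumped at 00:43:57.138484 is what lands in /var/tmp/minikube/kubeadm.yaml.new here; it is copied into place at 00:43:58.005542, with SystemVerification ignored because of the docker driver (00:43:58.013743). A hedged way to lint such a config without touching the cluster, assuming kubeadm sits alongside kubectl under /var/lib/minikube/binaries/v1.34.2 (not something minikube runs in this log):

    # Sketch: validate the staged kubeadm config file only
    sudo /var/lib/minikube/binaries/v1.34.2/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new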
	I1217 00:43:57.194703  313838 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1217 00:43:57.198279  313838 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 00:43:57.208105  313838 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:43:57.295736  313838 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 00:43:57.317230  313838 certs.go:69] Setting up /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/kindnet-802249 for IP: 192.168.103.2
	I1217 00:43:57.317247  313838 certs.go:195] generating shared ca certs ...
	I1217 00:43:57.317262  313838 certs.go:227] acquiring lock for ca certs: {Name:mk3fafd0dd66863a6056cb02497503a5e6afecf4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:43:57.317399  313838 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22168-12816/.minikube/ca.key
	I1217 00:43:57.317443  313838 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22168-12816/.minikube/proxy-client-ca.key
	I1217 00:43:57.317454  313838 certs.go:257] generating profile certs ...
	I1217 00:43:57.317502  313838 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/kindnet-802249/client.key
	I1217 00:43:57.317520  313838 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/kindnet-802249/client.crt with IP's: []
	I1217 00:43:57.385797  313838 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/kindnet-802249/client.crt ...
	I1217 00:43:57.385821  313838 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/kindnet-802249/client.crt: {Name:mk92cfd9d4891400b003067e68b73bcb09e793e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:43:57.385974  313838 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/kindnet-802249/client.key ...
	I1217 00:43:57.385985  313838 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/kindnet-802249/client.key: {Name:mk2ffb7563fe2e2f01507fc0ee4dd7a5f8f6e92f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:43:57.386090  313838 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/kindnet-802249/apiserver.key.57febc45
	I1217 00:43:57.386105  313838 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/kindnet-802249/apiserver.crt.57febc45 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1217 00:43:57.510696  313838 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/kindnet-802249/apiserver.crt.57febc45 ...
	I1217 00:43:57.510719  313838 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/kindnet-802249/apiserver.crt.57febc45: {Name:mke771a4c891a72cb294df456787840407961416 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:43:57.510870  313838 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/kindnet-802249/apiserver.key.57febc45 ...
	I1217 00:43:57.510883  313838 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/kindnet-802249/apiserver.key.57febc45: {Name:mkdbea0af7cef26436635d9259f16a4be906b200 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:43:57.510951  313838 certs.go:382] copying /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/kindnet-802249/apiserver.crt.57febc45 -> /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/kindnet-802249/apiserver.crt
	I1217 00:43:57.511044  313838 certs.go:386] copying /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/kindnet-802249/apiserver.key.57febc45 -> /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/kindnet-802249/apiserver.key
	I1217 00:43:57.511105  313838 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/kindnet-802249/proxy-client.key
	I1217 00:43:57.511119  313838 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/kindnet-802249/proxy-client.crt with IP's: []
	I1217 00:43:57.545154  313838 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/kindnet-802249/proxy-client.crt ...
	I1217 00:43:57.545178  313838 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/kindnet-802249/proxy-client.crt: {Name:mk6a57b3928a5e73a2b3cac1ff5564f5240dfb5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:43:57.545329  313838 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/kindnet-802249/proxy-client.key ...
	I1217 00:43:57.545345  313838 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/kindnet-802249/proxy-client.key: {Name:mk8cbf678d5d2979744f9ad4c4aed21830c25c1f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
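certs.go minted three profile certs here: the minikube-user client cert, the apiserver serving cert (generated with IP SANs 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.103.2 per the line at 00:43:57.386105), and the aggregator proxy-client pair. A quick way to confirm what went into the apiserver cert once it exists on disk:

    # Sketch: print the SANs embedded in the generated apiserver certificate
    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/kindnet-802249/apiserver.crt \
      | grep -A1 'Subject Alternative Name'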
	I1217 00:43:57.545533  313838 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/16354.pem (1338 bytes)
	W1217 00:43:57.545575  313838 certs.go:480] ignoring /home/jenkins/minikube-integration/22168-12816/.minikube/certs/16354_empty.pem, impossibly tiny 0 bytes
	I1217 00:43:57.545587  313838 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca-key.pem (1679 bytes)
	I1217 00:43:57.545622  313838 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem (1078 bytes)
	I1217 00:43:57.545652  313838 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/cert.pem (1123 bytes)
	I1217 00:43:57.545679  313838 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/key.pem (1679 bytes)
	I1217 00:43:57.545725  313838 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/files/etc/ssl/certs/163542.pem (1708 bytes)
	I1217 00:43:57.546443  313838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 00:43:57.564820  313838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1217 00:43:57.582885  313838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 00:43:57.600323  313838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1671 bytes)
	I1217 00:43:57.617632  313838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/kindnet-802249/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1217 00:43:57.634694  313838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/kindnet-802249/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 00:43:57.651197  313838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/kindnet-802249/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 00:43:57.668362  313838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/kindnet-802249/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1217 00:43:57.684924  313838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/certs/16354.pem --> /usr/share/ca-certificates/16354.pem (1338 bytes)
	I1217 00:43:57.703858  313838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/files/etc/ssl/certs/163542.pem --> /usr/share/ca-certificates/163542.pem (1708 bytes)
	I1217 00:43:57.720906  313838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 00:43:57.738178  313838 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 00:43:57.752021  313838 ssh_runner.go:195] Run: openssl version
	I1217 00:43:57.758718  313838 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/16354.pem
	I1217 00:43:57.766189  313838 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/16354.pem /etc/ssl/certs/16354.pem
	I1217 00:43:57.773438  313838 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16354.pem
	I1217 00:43:57.777098  313838 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 00:13 /usr/share/ca-certificates/16354.pem
	I1217 00:43:57.777144  313838 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16354.pem
	I1217 00:43:57.813283  313838 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 00:43:57.821198  313838 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/16354.pem /etc/ssl/certs/51391683.0
	I1217 00:43:57.828616  313838 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/163542.pem
	I1217 00:43:57.835921  313838 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/163542.pem /etc/ssl/certs/163542.pem
	I1217 00:43:57.843347  313838 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/163542.pem
	I1217 00:43:57.847096  313838 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 00:13 /usr/share/ca-certificates/163542.pem
	I1217 00:43:57.847144  313838 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/163542.pem
	I1217 00:43:57.884282  313838 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 00:43:57.891543  313838 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/163542.pem /etc/ssl/certs/3ec20f2e.0
	I1217 00:43:57.898751  313838 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:43:57.906584  313838 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 00:43:57.913530  313838 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:43:57.917541  313838 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 00:05 /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:43:57.917590  313838 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:43:57.952623  313838 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 00:43:57.960031  313838 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
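The pattern repeated three times above (for 16354.pem, 163542.pem and minikubeCA.pem) is the OpenSSL trust-store convention: each PEM is linked into /etc/ssl/certs, its subject hash is computed with openssl x509 -hash -noout, and a <hash>.0 symlink is added so library lookups can find it (b5213941.0 for minikubeCA.pem here). The same steps by hand:

    # Sketch: install one CA into the hashed trust store manually
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
    h=$(openssl x509 -hash -noout -in /etc/ssl/certs/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"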
	I1217 00:43:57.967094  313838 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 00:43:57.970510  313838 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1217 00:43:57.970559  313838 kubeadm.go:401] StartCluster: {Name:kindnet-802249 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kindnet-802249 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:43:57.970619  313838 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 00:43:57.970665  313838 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 00:43:57.997411  313838 cri.go:89] found id: ""
	I1217 00:43:57.997467  313838 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 00:43:58.005542  313838 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 00:43:58.013743  313838 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 00:43:58.013822  313838 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 00:43:58.022216  313838 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 00:43:58.022257  313838 kubeadm.go:158] found existing configuration files:
	
	I1217 00:43:58.022304  313838 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1217 00:43:58.030037  313838 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 00:43:58.030082  313838 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 00:43:58.038143  313838 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1217 00:43:58.046709  313838 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 00:43:58.046758  313838 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 00:43:58.055580  313838 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1217 00:43:58.064836  313838 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 00:43:58.064883  313838 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 00:43:58.072259  313838 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1217 00:43:58.079615  313838 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 00:43:58.079665  313838 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 00:43:58.086672  313838 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 00:43:58.123646  313838 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1217 00:43:58.123708  313838 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 00:43:58.143710  313838 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 00:43:58.143791  313838 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1217 00:43:58.143850  313838 kubeadm.go:319] OS: Linux
	I1217 00:43:58.143937  313838 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 00:43:58.144008  313838 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 00:43:58.144084  313838 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 00:43:58.144124  313838 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 00:43:58.144192  313838 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 00:43:58.144253  313838 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 00:43:58.144341  313838 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 00:43:58.144419  313838 kubeadm.go:319] CGROUPS_IO: enabled
	I1217 00:43:58.199791  313838 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 00:43:58.199983  313838 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 00:43:58.200137  313838 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 00:43:58.208178  313838 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 00:43:58.210887  313838 out.go:252]   - Generating certificates and keys ...
	I1217 00:43:58.211008  313838 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 00:43:58.211099  313838 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 00:43:58.585765  313838 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	W1217 00:43:56.249954  306295 pod_ready.go:104] pod "coredns-66bc5c9577-v76f4" is not "Ready", error: <nil>
	W1217 00:43:58.749379  306295 pod_ready.go:104] pod "coredns-66bc5c9577-v76f4" is not "Ready", error: <nil>
	I1217 00:43:58.868066  301437 pod_ready.go:94] pod "coredns-66bc5c9577-vtspd" is "Ready"
	I1217 00:43:58.868088  301437 pod_ready.go:86] duration metric: took 35.505656374s for pod "coredns-66bc5c9577-vtspd" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:43:58.870625  301437 pod_ready.go:83] waiting for pod "etcd-embed-certs-153232" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:43:58.874003  301437 pod_ready.go:94] pod "etcd-embed-certs-153232" is "Ready"
	I1217 00:43:58.874019  301437 pod_ready.go:86] duration metric: took 3.374352ms for pod "etcd-embed-certs-153232" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:43:58.875921  301437 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-153232" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:43:58.879161  301437 pod_ready.go:94] pod "kube-apiserver-embed-certs-153232" is "Ready"
	I1217 00:43:58.879179  301437 pod_ready.go:86] duration metric: took 3.241989ms for pod "kube-apiserver-embed-certs-153232" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:43:58.880975  301437 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-153232" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:43:59.066631  301437 pod_ready.go:94] pod "kube-controller-manager-embed-certs-153232" is "Ready"
	I1217 00:43:59.066655  301437 pod_ready.go:86] duration metric: took 185.632998ms for pod "kube-controller-manager-embed-certs-153232" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:43:59.266673  301437 pod_ready.go:83] waiting for pod "kube-proxy-82b8k" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:43:59.665791  301437 pod_ready.go:94] pod "kube-proxy-82b8k" is "Ready"
	I1217 00:43:59.665819  301437 pod_ready.go:86] duration metric: took 399.116964ms for pod "kube-proxy-82b8k" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:43:59.866885  301437 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-153232" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:44:00.266138  301437 pod_ready.go:94] pod "kube-scheduler-embed-certs-153232" is "Ready"
	I1217 00:44:00.266171  301437 pod_ready.go:86] duration metric: took 399.26082ms for pod "kube-scheduler-embed-certs-153232" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:44:00.266188  301437 pod_ready.go:40] duration metric: took 36.907957575s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 00:44:00.315611  301437 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1217 00:44:00.318195  301437 out.go:179] * Done! kubectl is now configured to use "embed-certs-153232" cluster and "default" namespace by default
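
The pod_ready.go lines in this log come from minikube polling each kube-system component pod until its Ready condition turns True or a timeout expires. A rough client-go equivalent of one such wait, as a sketch that assumes a reachable kubeconfig (podReady is an illustrative helper name; the pod name is taken from the log above):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Poll until the pod is Ready or the timeout elapses, retrying on transient errors.
	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 5*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			p, err := cs.CoreV1().Pods("kube-system").Get(ctx, "coredns-66bc5c9577-vtspd", metav1.GetOptions{})
			if err != nil {
				return false, nil
			}
			return podReady(p), nil
		})
	fmt.Println("ready:", err == nil)
}
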
	I1217 00:43:58.864406  313838 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1217 00:43:58.940351  313838 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1217 00:43:59.053118  313838 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1217 00:43:59.147896  313838 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1217 00:43:59.148091  313838 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [kindnet-802249 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1217 00:43:59.522555  313838 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1217 00:43:59.522728  313838 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [kindnet-802249 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1217 00:43:59.592047  313838 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1217 00:44:00.219192  313838 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1217 00:44:00.400112  313838 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1217 00:44:00.400255  313838 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 00:44:00.691745  313838 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 00:44:00.833794  313838 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 00:44:00.905110  313838 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 00:44:01.053550  313838 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 00:44:01.242847  313838 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 00:44:01.243519  313838 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 00:44:01.247610  313838 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1217 00:43:59.532905  307526 node_ready.go:57] node "auto-802249" has "Ready":"False" status (will retry)
	W1217 00:44:02.032674  307526 node_ready.go:57] node "auto-802249" has "Ready":"False" status (will retry)
	I1217 00:44:01.248940  313838 out.go:252]   - Booting up control plane ...
	I1217 00:44:01.249105  313838 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 00:44:01.249226  313838 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 00:44:01.249948  313838 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 00:44:01.264083  313838 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 00:44:01.264175  313838 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 00:44:01.270691  313838 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 00:44:01.271068  313838 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 00:44:01.271143  313838 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 00:44:01.368444  313838 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 00:44:01.368635  313838 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 00:44:02.369489  313838 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001755867s
	I1217 00:44:02.372487  313838 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1217 00:44:02.372606  313838 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1217 00:44:02.372722  313838 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1217 00:44:02.372852  313838 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1217 00:44:03.427929  313838 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.052397104s
	W1217 00:44:01.249224  306295 pod_ready.go:104] pod "coredns-66bc5c9577-v76f4" is not "Ready", error: <nil>
	W1217 00:44:03.752442  306295 pod_ready.go:104] pod "coredns-66bc5c9577-v76f4" is not "Ready", error: <nil>
	I1217 00:44:03.868822  313838 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.496304145s
	I1217 00:44:05.374156  313838 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.001561713s
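
The control-plane-check phase above probes each component's local health endpoint (kube-controller-manager on 10257/healthz, kube-scheduler on 10259/livez, kube-apiserver on 8443/livez) until it answers. A hedged sketch of that style of probe; checkEndpoint is an illustrative name, and the real kubeadm check has its own TLS handling and retry backoff:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// checkEndpoint issues a GET against a component health URL and reports
// whether it answered 200 OK.
func checkEndpoint(url string) bool {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// The local serving certs are self-signed, so skip verification here.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return false
	}
	defer resp.Body.Close()
	return resp.StatusCode == http.StatusOK
}

func main() {
	for _, u := range []string{
		"https://127.0.0.1:10257/healthz",  // kube-controller-manager
		"https://127.0.0.1:10259/livez",    // kube-scheduler
		"https://192.168.103.2:8443/livez", // kube-apiserver
	} {
		fmt.Println(u, checkEndpoint(u))
	}
}
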
	I1217 00:44:05.390166  313838 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1217 00:44:05.399767  313838 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1217 00:44:05.407445  313838 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1217 00:44:05.407721  313838 kubeadm.go:319] [mark-control-plane] Marking the node kindnet-802249 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1217 00:44:05.415604  313838 kubeadm.go:319] [bootstrap-token] Using token: cs49lo.756orz921ne6woru
	I1217 00:44:05.416721  313838 out.go:252]   - Configuring RBAC rules ...
	I1217 00:44:05.416869  313838 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1217 00:44:05.420549  313838 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1217 00:44:05.425523  313838 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1217 00:44:05.427836  313838 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1217 00:44:05.430123  313838 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1217 00:44:05.433148  313838 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1217 00:44:05.780559  313838 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1217 00:44:06.194472  313838 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1217 00:44:06.780539  313838 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1217 00:44:06.781493  313838 kubeadm.go:319] 
	I1217 00:44:06.781588  313838 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1217 00:44:06.781600  313838 kubeadm.go:319] 
	I1217 00:44:06.781716  313838 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1217 00:44:06.781726  313838 kubeadm.go:319] 
	I1217 00:44:06.781756  313838 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1217 00:44:06.781838  313838 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1217 00:44:06.781892  313838 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1217 00:44:06.781897  313838 kubeadm.go:319] 
	I1217 00:44:06.781983  313838 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1217 00:44:06.782015  313838 kubeadm.go:319] 
	I1217 00:44:06.782067  313838 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1217 00:44:06.782077  313838 kubeadm.go:319] 
	I1217 00:44:06.782136  313838 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1217 00:44:06.782198  313838 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1217 00:44:06.782256  313838 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1217 00:44:06.782262  313838 kubeadm.go:319] 
	I1217 00:44:06.782368  313838 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1217 00:44:06.782472  313838 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1217 00:44:06.782487  313838 kubeadm.go:319] 
	I1217 00:44:06.782598  313838 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token cs49lo.756orz921ne6woru \
	I1217 00:44:06.782724  313838 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:a7c34974519aee4953e03245da076d7a2eba06e40135880a85806e2dab303fa1 \
	I1217 00:44:06.782760  313838 kubeadm.go:319] 	--control-plane 
	I1217 00:44:06.782774  313838 kubeadm.go:319] 
	I1217 00:44:06.782881  313838 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1217 00:44:06.782889  313838 kubeadm.go:319] 
	I1217 00:44:06.783023  313838 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token cs49lo.756orz921ne6woru \
	I1217 00:44:06.783175  313838 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:a7c34974519aee4953e03245da076d7a2eba06e40135880a85806e2dab303fa1 
	I1217 00:44:06.786022  313838 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1217 00:44:06.786111  313838 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 00:44:06.786132  313838 cni.go:84] Creating CNI manager for "kindnet"
	I1217 00:44:06.787727  313838 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1217 00:44:04.033110  307526 node_ready.go:57] node "auto-802249" has "Ready":"False" status (will retry)
	W1217 00:44:06.033595  307526 node_ready.go:57] node "auto-802249" has "Ready":"False" status (will retry)
	I1217 00:44:06.789157  313838 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1217 00:44:06.793369  313838 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1217 00:44:06.793384  313838 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1217 00:44:06.807960  313838 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1217 00:44:07.015406  313838 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1217 00:44:07.015540  313838 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes kindnet-802249 minikube.k8s.io/updated_at=2025_12_17T00_44_07_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=c7bb9b74fe8fa422b352c813eb039f077f405cb1 minikube.k8s.io/name=kindnet-802249 minikube.k8s.io/primary=true
	I1217 00:44:07.015561  313838 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:44:07.026153  313838 ops.go:34] apiserver oom_adj: -16
	I1217 00:44:07.095769  313838 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:44:07.596762  313838 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:44:08.096600  313838 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:44:08.596371  313838 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1217 00:44:06.248953  306295 pod_ready.go:104] pod "coredns-66bc5c9577-v76f4" is not "Ready", error: <nil>
	W1217 00:44:08.249591  306295 pod_ready.go:104] pod "coredns-66bc5c9577-v76f4" is not "Ready", error: <nil>
	I1217 00:44:09.095823  313838 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:44:09.596406  313838 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:44:10.095948  313838 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:44:10.596695  313838 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:44:11.095801  313838 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:44:11.596159  313838 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:44:12.096735  313838 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:44:12.171026  313838 kubeadm.go:1114] duration metric: took 5.155536062s to wait for elevateKubeSystemPrivileges
	I1217 00:44:12.171066  313838 kubeadm.go:403] duration metric: took 14.200507734s to StartCluster
	I1217 00:44:12.171088  313838 settings.go:142] acquiring lock: {Name:mk7d7632cd00ceda791845d793d841181ea8188a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:44:12.171157  313838 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22168-12816/kubeconfig
	I1217 00:44:12.172967  313838 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/kubeconfig: {Name:mkd977475b39383babd1c1bcd25b2b3c1ea2d727 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:44:12.173234  313838 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1217 00:44:12.173250  313838 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 00:44:12.173233  313838 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 00:44:12.173327  313838 addons.go:70] Setting default-storageclass=true in profile "kindnet-802249"
	I1217 00:44:12.173344  313838 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "kindnet-802249"
	I1217 00:44:12.173415  313838 config.go:182] Loaded profile config "kindnet-802249": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:44:12.173321  313838 addons.go:70] Setting storage-provisioner=true in profile "kindnet-802249"
	I1217 00:44:12.173452  313838 addons.go:239] Setting addon storage-provisioner=true in "kindnet-802249"
	I1217 00:44:12.173497  313838 host.go:66] Checking if "kindnet-802249" exists ...
	I1217 00:44:12.173901  313838 cli_runner.go:164] Run: docker container inspect kindnet-802249 --format={{.State.Status}}
	I1217 00:44:12.174119  313838 cli_runner.go:164] Run: docker container inspect kindnet-802249 --format={{.State.Status}}
	I1217 00:44:12.175581  313838 out.go:179] * Verifying Kubernetes components...
	I1217 00:44:12.177061  313838 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:44:12.196411  313838 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 00:44:12.196900  313838 addons.go:239] Setting addon default-storageclass=true in "kindnet-802249"
	I1217 00:44:12.196943  313838 host.go:66] Checking if "kindnet-802249" exists ...
	I1217 00:44:12.197440  313838 cli_runner.go:164] Run: docker container inspect kindnet-802249 --format={{.State.Status}}
	I1217 00:44:12.197519  313838 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:44:12.197539  313838 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 00:44:12.197588  313838 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-802249
	I1217 00:44:12.225604  313838 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 00:44:12.225629  313838 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 00:44:12.225672  313838 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-802249
	I1217 00:44:12.225803  313838 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/kindnet-802249/id_rsa Username:docker}
	I1217 00:44:12.254222  313838 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/kindnet-802249/id_rsa Username:docker}
	I1217 00:44:12.275749  313838 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1217 00:44:12.345522  313838 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 00:44:12.350380  313838 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:44:12.372208  313838 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:44:12.505134  313838 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
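
The sed pipeline a few lines above splices a hosts block for host.minikube.internal into the coredns Corefile and replaces the ConfigMap, which is what the "host record injected" line confirms. Roughly the same edit expressed with client-go, as an illustrative sketch (injectHostRecord is a hypothetical helper; it assumes the Corefile contains a literal "forward . /etc/resolv.conf" directive):

package sketch

import (
	"context"
	"fmt"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// injectHostRecord inserts a hosts{} stanza ahead of the forward directive in
// the coredns Corefile so host.minikube.internal resolves to hostIP.
func injectHostRecord(ctx context.Context, cs kubernetes.Interface, hostIP string) error {
	cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		return err
	}
	hosts := fmt.Sprintf("hosts {\n   %s host.minikube.internal\n   fallthrough\n}\n", hostIP)
	cm.Data["Corefile"] = strings.Replace(cm.Data["Corefile"],
		"forward . /etc/resolv.conf", hosts+"forward . /etc/resolv.conf", 1)
	_, err = cs.CoreV1().ConfigMaps("kube-system").Update(ctx, cm, metav1.UpdateOptions{})
	return err
}
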
	I1217 00:44:12.506852  313838 node_ready.go:35] waiting up to 15m0s for node "kindnet-802249" to be "Ready" ...
	I1217 00:44:12.682077  313838 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	W1217 00:44:08.532510  307526 node_ready.go:57] node "auto-802249" has "Ready":"False" status (will retry)
	W1217 00:44:10.532916  307526 node_ready.go:57] node "auto-802249" has "Ready":"False" status (will retry)
	W1217 00:44:12.534707  307526 node_ready.go:57] node "auto-802249" has "Ready":"False" status (will retry)
	I1217 00:44:12.683060  313838 addons.go:530] duration metric: took 509.805357ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1217 00:44:13.009768  313838 kapi.go:214] "coredns" deployment in "kube-system" namespace and "kindnet-802249" context rescaled to 1 replicas
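
The kapi.go line above records that the coredns Deployment was rescaled to a single replica for this one-node profile. A minimal client-go sketch of such a rescale through the scale subresource (scaleCoreDNS is a hypothetical helper, not minikube's function):

package sketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// scaleCoreDNS sets the replica count of the kube-system/coredns Deployment
// via the scale subresource.
func scaleCoreDNS(ctx context.Context, cs kubernetes.Interface, replicas int32) error {
	scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		return err
	}
	scale.Spec.Replicas = replicas
	_, err = cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{})
	return err
}
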
	W1217 00:44:10.749938  306295 pod_ready.go:104] pod "coredns-66bc5c9577-v76f4" is not "Ready", error: <nil>
	W1217 00:44:13.249160  306295 pod_ready.go:104] pod "coredns-66bc5c9577-v76f4" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Dec 17 00:43:33 embed-certs-153232 crio[568]: time="2025-12-17T00:43:33.313677421Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 17 00:43:33 embed-certs-153232 crio[568]: time="2025-12-17T00:43:33.317636191Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 17 00:43:33 embed-certs-153232 crio[568]: time="2025-12-17T00:43:33.317656109Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 17 00:43:47 embed-certs-153232 crio[568]: time="2025-12-17T00:43:47.565885589Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=6f22add4-ed3c-4ad6-86b9-6c89bb9b94e3 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:43:47 embed-certs-153232 crio[568]: time="2025-12-17T00:43:47.569936688Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=7e024577-c8f1-4791-94fa-d1c70a185b93 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:43:47 embed-certs-153232 crio[568]: time="2025-12-17T00:43:47.573460726Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9pfwm/dashboard-metrics-scraper" id=c0b102d8-31a6-48e3-9b32-d789aacbc4ee name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 00:43:47 embed-certs-153232 crio[568]: time="2025-12-17T00:43:47.573599443Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 00:43:47 embed-certs-153232 crio[568]: time="2025-12-17T00:43:47.582458096Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 00:43:47 embed-certs-153232 crio[568]: time="2025-12-17T00:43:47.583127918Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 00:43:47 embed-certs-153232 crio[568]: time="2025-12-17T00:43:47.628031625Z" level=info msg="Created container 3d2c3aa6013510ed343b70dda91e1024e94192c440d8cb7aa743b80510c1917f: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9pfwm/dashboard-metrics-scraper" id=c0b102d8-31a6-48e3-9b32-d789aacbc4ee name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 00:43:47 embed-certs-153232 crio[568]: time="2025-12-17T00:43:47.628719425Z" level=info msg="Starting container: 3d2c3aa6013510ed343b70dda91e1024e94192c440d8cb7aa743b80510c1917f" id=14e737b1-f6cf-4071-ba6f-a103928ff2eb name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 00:43:47 embed-certs-153232 crio[568]: time="2025-12-17T00:43:47.631442448Z" level=info msg="Started container" PID=1763 containerID=3d2c3aa6013510ed343b70dda91e1024e94192c440d8cb7aa743b80510c1917f description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9pfwm/dashboard-metrics-scraper id=14e737b1-f6cf-4071-ba6f-a103928ff2eb name=/runtime.v1.RuntimeService/StartContainer sandboxID=8898d3657ffc648c2e23a2e3c84ba91090a624613ccfa7e399701cc6657c0761
	Dec 17 00:43:47 embed-certs-153232 crio[568]: time="2025-12-17T00:43:47.679758586Z" level=info msg="Removing container: a31ea9167311a0aaaa4fc8157542f43c53c0b4488e6d8118dc1dd2dee64b8e0c" id=8f462a94-0297-43d2-9a5b-583a3569ec98 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 17 00:43:47 embed-certs-153232 crio[568]: time="2025-12-17T00:43:47.693972935Z" level=info msg="Removed container a31ea9167311a0aaaa4fc8157542f43c53c0b4488e6d8118dc1dd2dee64b8e0c: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9pfwm/dashboard-metrics-scraper" id=8f462a94-0297-43d2-9a5b-583a3569ec98 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 17 00:43:53 embed-certs-153232 crio[568]: time="2025-12-17T00:43:53.697691571Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=c44c8f59-4acf-424f-afd1-f6adb9a8c014 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:43:53 embed-certs-153232 crio[568]: time="2025-12-17T00:43:53.700050041Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=4775b348-5537-4b3c-8ca7-be0fd9c69944 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:43:53 embed-certs-153232 crio[568]: time="2025-12-17T00:43:53.706481838Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=b22d5b04-b044-4d96-b873-fc91a656925d name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 00:43:53 embed-certs-153232 crio[568]: time="2025-12-17T00:43:53.706618997Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 00:43:53 embed-certs-153232 crio[568]: time="2025-12-17T00:43:53.714376304Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 00:43:53 embed-certs-153232 crio[568]: time="2025-12-17T00:43:53.714579822Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/8b505f63b6dc7321183c52d8c3a8c90aa316d54ac6680e558659344294668f83/merged/etc/passwd: no such file or directory"
	Dec 17 00:43:53 embed-certs-153232 crio[568]: time="2025-12-17T00:43:53.714621469Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/8b505f63b6dc7321183c52d8c3a8c90aa316d54ac6680e558659344294668f83/merged/etc/group: no such file or directory"
	Dec 17 00:43:53 embed-certs-153232 crio[568]: time="2025-12-17T00:43:53.714937129Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 00:43:53 embed-certs-153232 crio[568]: time="2025-12-17T00:43:53.754462944Z" level=info msg="Created container 4aa28ef7b86e0ac2c8860e0731143889f5585d08d1c8e3092e5fdbae502d7645: kube-system/storage-provisioner/storage-provisioner" id=b22d5b04-b044-4d96-b873-fc91a656925d name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 00:43:53 embed-certs-153232 crio[568]: time="2025-12-17T00:43:53.755507607Z" level=info msg="Starting container: 4aa28ef7b86e0ac2c8860e0731143889f5585d08d1c8e3092e5fdbae502d7645" id=27d8dc3d-3d31-4460-be6e-0a0bf3e535d4 name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 00:43:53 embed-certs-153232 crio[568]: time="2025-12-17T00:43:53.75803986Z" level=info msg="Started container" PID=1777 containerID=4aa28ef7b86e0ac2c8860e0731143889f5585d08d1c8e3092e5fdbae502d7645 description=kube-system/storage-provisioner/storage-provisioner id=27d8dc3d-3d31-4460-be6e-0a0bf3e535d4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=615b17a22f66a573db0f49677f28f795a1d05848ced32f6e454f6af3018ae915
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	4aa28ef7b86e0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           22 seconds ago      Running             storage-provisioner         1                   615b17a22f66a       storage-provisioner                          kube-system
	3d2c3aa601351       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           28 seconds ago      Exited              dashboard-metrics-scraper   2                   8898d3657ffc6       dashboard-metrics-scraper-6ffb444bf9-9pfwm   kubernetes-dashboard
	d4b900c582c6a       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   43 seconds ago      Running             kubernetes-dashboard        0                   89f3d9a3a354d       kubernetes-dashboard-855c9754f9-472j2        kubernetes-dashboard
	932d916c8f226       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           53 seconds ago      Running             coredns                     0                   6dbab6d186c3c       coredns-66bc5c9577-vtspd                     kube-system
	e8fd5e53eb9ce       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           53 seconds ago      Running             busybox                     1                   169be801cfb10       busybox                                      default
	9e12fba8024ab       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           53 seconds ago      Exited              storage-provisioner         0                   615b17a22f66a       storage-provisioner                          kube-system
	e8ac4e7470f94       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                           53 seconds ago      Running             kube-proxy                  0                   d3a75bef2497b       kube-proxy-82b8k                             kube-system
	6301e99f54ccb       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           53 seconds ago      Running             kindnet-cni                 0                   714f068ea711e       kindnet-zffzt                                kube-system
	dadde2213b8a8       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                           56 seconds ago      Running             kube-controller-manager     0                   b79095ab23e17       kube-controller-manager-embed-certs-153232   kube-system
	117e1e782a798       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                           56 seconds ago      Running             etcd                        0                   c3a0485a6ea40       etcd-embed-certs-153232                      kube-system
	f3a000d40d6d7       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                           56 seconds ago      Running             kube-scheduler              0                   a974f7cb0be76       kube-scheduler-embed-certs-153232            kube-system
	a770bc08061f9       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                           56 seconds ago      Running             kube-apiserver              0                   51b381b17bdcf       kube-apiserver-embed-certs-153232            kube-system
	
	
	==> coredns [932d916c8f226125fbf4338249dcdb35a5f6d7adf40a1fb61934237d9cba3980] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:42590 - 1415 "HINFO IN 902495801244066443.6793965876226482938. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.059098168s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-153232
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-153232
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c7bb9b74fe8fa422b352c813eb039f077f405cb1
	                    minikube.k8s.io/name=embed-certs-153232
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_17T00_42_25_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Dec 2025 00:42:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-153232
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Dec 2025 00:44:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Dec 2025 00:43:52 +0000   Wed, 17 Dec 2025 00:42:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Dec 2025 00:43:52 +0000   Wed, 17 Dec 2025 00:42:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Dec 2025 00:43:52 +0000   Wed, 17 Dec 2025 00:42:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Dec 2025 00:43:52 +0000   Wed, 17 Dec 2025 00:42:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    embed-certs-153232
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 f435001bb2f82675fe905cfc693dda54
	  System UUID:                5d400583-a23e-4e06-8ba1-0a6ece90e0c3
	  Boot ID:                    0e9cedc6-c46e-4354-b3d2-9272a8b33ae5
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 coredns-66bc5c9577-vtspd                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     106s
	  kube-system                 etcd-embed-certs-153232                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         112s
	  kube-system                 kindnet-zffzt                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      106s
	  kube-system                 kube-apiserver-embed-certs-153232             250m (3%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-controller-manager-embed-certs-153232    200m (2%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-proxy-82b8k                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 kube-scheduler-embed-certs-153232             100m (1%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-9pfwm    0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-472j2         0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 105s               kube-proxy       
	  Normal  Starting                 53s                kube-proxy       
	  Normal  NodeHasSufficientMemory  112s               kubelet          Node embed-certs-153232 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    112s               kubelet          Node embed-certs-153232 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     112s               kubelet          Node embed-certs-153232 status is now: NodeHasSufficientPID
	  Normal  Starting                 112s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           107s               node-controller  Node embed-certs-153232 event: Registered Node embed-certs-153232 in Controller
	  Normal  NodeReady                95s                kubelet          Node embed-certs-153232 status is now: NodeReady
	  Normal  Starting                 57s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  57s (x8 over 57s)  kubelet          Node embed-certs-153232 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    57s (x8 over 57s)  kubelet          Node embed-certs-153232 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     57s (x8 over 57s)  kubelet          Node embed-certs-153232 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           51s                node-controller  Node embed-certs-153232 event: Registered Node embed-certs-153232 in Controller
	
	
	==> dmesg <==
	[  +0.089382] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024236] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.864694] kauditd_printk_skb: 47 callbacks suppressed
	[Dec17 00:07] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +1.006904] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +1.023891] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +1.023891] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +1.023853] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +1.023879] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +2.048755] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +4.030595] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +8.447143] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[ +16.382404] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000015] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[Dec17 00:08] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	
	
	==> etcd [117e1e782a79833091ca7f1a9da4be915158517d3d54c5674f3b4e0875f18cce] <==
	{"level":"warn","ts":"2025-12-17T00:43:21.052667Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47238","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:21.060746Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47262","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:21.072093Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47280","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:21.076654Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:21.085468Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47318","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:21.094387Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47328","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:21.102423Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47334","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:21.113716Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47368","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:21.122246Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47370","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:21.130458Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47386","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:21.139078Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47400","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:21.146106Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:21.154292Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47438","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:21.162306Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47454","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:21.171706Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47474","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:21.183837Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47490","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:21.189870Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47498","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:21.197272Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47516","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:21.205803Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47548","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:21.213429Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47558","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:21.234824Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47574","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:21.242650Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47582","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:21.249470Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47602","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:21.303800Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47620","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-17T00:43:52.667139Z","caller":"traceutil/trace.go:172","msg":"trace[890311236] transaction","detail":"{read_only:false; response_revision:617; number_of_response:1; }","duration":"117.46062ms","start":"2025-12-17T00:43:52.549660Z","end":"2025-12-17T00:43:52.667121Z","steps":["trace[890311236] 'process raft request'  (duration: 117.325339ms)"],"step_count":1}
	
	
	==> kernel <==
	 00:44:16 up  1:26,  0 user,  load average: 3.15, 2.91, 2.02
	Linux embed-certs-153232 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [6301e99f54ccbfcaa7a5dde58d324c165f0fe60d9d03ed0b9fa97c55700ac344] <==
	I1217 00:43:23.085153       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1217 00:43:23.085408       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1217 00:43:23.085555       1 main.go:148] setting mtu 1500 for CNI 
	I1217 00:43:23.085570       1 main.go:178] kindnetd IP family: "ipv4"
	I1217 00:43:23.085590       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-17T00:43:23Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1217 00:43:23.378831       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1217 00:43:23.378863       1 controller.go:381] "Waiting for informer caches to sync"
	I1217 00:43:23.378884       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1217 00:43:23.478257       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1217 00:43:23.747546       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1217 00:43:23.747580       1 metrics.go:72] Registering metrics
	I1217 00:43:23.747658       1 controller.go:711] "Syncing nftables rules"
	I1217 00:43:33.288730       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1217 00:43:33.288801       1 main.go:301] handling current node
	I1217 00:43:43.290099       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1217 00:43:43.290133       1 main.go:301] handling current node
	I1217 00:43:53.287868       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1217 00:43:53.287907       1 main.go:301] handling current node
	I1217 00:44:03.291465       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1217 00:44:03.291509       1 main.go:301] handling current node
	I1217 00:44:13.296130       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1217 00:44:13.296165       1 main.go:301] handling current node
	
	
	==> kube-apiserver [a770bc08061f975f567cb7fb7cec6883ec6d5215d19863d7ddb2cc0049571d8b] <==
	I1217 00:43:21.789725       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1217 00:43:21.789738       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1217 00:43:21.790423       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1217 00:43:21.790436       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1217 00:43:21.790468       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1217 00:43:21.790579       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1217 00:43:21.800080       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1217 00:43:21.811283       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1217 00:43:21.811412       1 aggregator.go:171] initial CRD sync complete...
	I1217 00:43:21.811446       1 autoregister_controller.go:144] Starting autoregister controller
	I1217 00:43:21.811471       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1217 00:43:21.811494       1 cache.go:39] Caches are synced for autoregister controller
	I1217 00:43:21.823858       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1217 00:43:21.843663       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1217 00:43:22.087388       1 controller.go:667] quota admission added evaluator for: namespaces
	I1217 00:43:22.116178       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1217 00:43:22.134397       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1217 00:43:22.150264       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1217 00:43:22.156390       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1217 00:43:22.187916       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.111.84.109"}
	I1217 00:43:22.196745       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.106.234.0"}
	I1217 00:43:22.692814       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1217 00:43:25.177916       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1217 00:43:25.523903       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1217 00:43:25.672837       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [dadde2213b8a894873343cf42602c1bedb001a3311bd9672a69d0fa4a07d9786] <==
	I1217 00:43:25.121252       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1217 00:43:25.121983       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1217 00:43:25.125986       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1217 00:43:25.127109       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1217 00:43:25.127177       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1217 00:43:25.127213       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1217 00:43:25.127217       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1217 00:43:25.127221       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1217 00:43:25.128248       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1217 00:43:25.128306       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1217 00:43:25.129487       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1217 00:43:25.129601       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1217 00:43:25.131877       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1217 00:43:25.133042       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1217 00:43:25.135314       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1217 00:43:25.135376       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1217 00:43:25.137520       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1217 00:43:25.138769       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1217 00:43:25.141097       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1217 00:43:25.143331       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1217 00:43:25.145606       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1217 00:43:25.145730       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1217 00:43:25.145817       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-153232"
	I1217 00:43:25.145890       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1217 00:43:25.157532       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [e8ac4e7470f9424e1e7541237e9c9cdc16aa75232ea66c1cdc71939466c64b0d] <==
	I1217 00:43:22.982655       1 server_linux.go:53] "Using iptables proxy"
	I1217 00:43:23.041714       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1217 00:43:23.142508       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1217 00:43:23.142547       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1217 00:43:23.142615       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1217 00:43:23.161300       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1217 00:43:23.161356       1 server_linux.go:132] "Using iptables Proxier"
	I1217 00:43:23.166342       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1217 00:43:23.166836       1 server.go:527] "Version info" version="v1.34.2"
	I1217 00:43:23.166874       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 00:43:23.168439       1 config.go:403] "Starting serviceCIDR config controller"
	I1217 00:43:23.169088       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1217 00:43:23.169117       1 config.go:309] "Starting node config controller"
	I1217 00:43:23.169146       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1217 00:43:23.169156       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1217 00:43:23.168449       1 config.go:200] "Starting service config controller"
	I1217 00:43:23.169279       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1217 00:43:23.170980       1 config.go:106] "Starting endpoint slice config controller"
	I1217 00:43:23.171017       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1217 00:43:23.269350       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1217 00:43:23.270456       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1217 00:43:23.271667       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [f3a000d40d6d7ebc54a27ecd08dc5aa3b530c6e66b7327ec3ec09941fca5d2ce] <==
	I1217 00:43:21.497943       1 serving.go:386] Generated self-signed cert in-memory
	I1217 00:43:22.129848       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1217 00:43:22.129888       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 00:43:22.134584       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1217 00:43:22.134629       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1217 00:43:22.134676       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1217 00:43:22.135171       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1217 00:43:22.134711       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1217 00:43:22.135049       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1217 00:43:22.135233       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1217 00:43:22.135066       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1217 00:43:22.235245       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1217 00:43:22.235252       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1217 00:43:22.237321       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Dec 17 00:43:25 embed-certs-153232 kubelet[733]: I1217 00:43:25.675375     733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/9f5811f0-bf00-4b4b-a326-a1e04c616776-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-472j2\" (UID: \"9f5811f0-bf00-4b4b-a326-a1e04c616776\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-472j2"
	Dec 17 00:43:25 embed-certs-153232 kubelet[733]: I1217 00:43:25.675404     733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q676m\" (UniqueName: \"kubernetes.io/projected/9f5811f0-bf00-4b4b-a326-a1e04c616776-kube-api-access-q676m\") pod \"kubernetes-dashboard-855c9754f9-472j2\" (UID: \"9f5811f0-bf00-4b4b-a326-a1e04c616776\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-472j2"
	Dec 17 00:43:28 embed-certs-153232 kubelet[733]: I1217 00:43:28.613821     733 scope.go:117] "RemoveContainer" containerID="cc772cfb311a9881bcbe4f6ed1033793fa717f2e540bc07449315af49ef193b9"
	Dec 17 00:43:28 embed-certs-153232 kubelet[733]: I1217 00:43:28.747561     733 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Dec 17 00:43:29 embed-certs-153232 kubelet[733]: I1217 00:43:29.619076     733 scope.go:117] "RemoveContainer" containerID="cc772cfb311a9881bcbe4f6ed1033793fa717f2e540bc07449315af49ef193b9"
	Dec 17 00:43:29 embed-certs-153232 kubelet[733]: I1217 00:43:29.619443     733 scope.go:117] "RemoveContainer" containerID="a31ea9167311a0aaaa4fc8157542f43c53c0b4488e6d8118dc1dd2dee64b8e0c"
	Dec 17 00:43:29 embed-certs-153232 kubelet[733]: E1217 00:43:29.619624     733 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-9pfwm_kubernetes-dashboard(657354ac-ce6a-4ee6-b133-99fa4afa1442)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9pfwm" podUID="657354ac-ce6a-4ee6-b133-99fa4afa1442"
	Dec 17 00:43:30 embed-certs-153232 kubelet[733]: I1217 00:43:30.623430     733 scope.go:117] "RemoveContainer" containerID="a31ea9167311a0aaaa4fc8157542f43c53c0b4488e6d8118dc1dd2dee64b8e0c"
	Dec 17 00:43:30 embed-certs-153232 kubelet[733]: E1217 00:43:30.623593     733 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-9pfwm_kubernetes-dashboard(657354ac-ce6a-4ee6-b133-99fa4afa1442)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9pfwm" podUID="657354ac-ce6a-4ee6-b133-99fa4afa1442"
	Dec 17 00:43:35 embed-certs-153232 kubelet[733]: I1217 00:43:35.090631     733 scope.go:117] "RemoveContainer" containerID="a31ea9167311a0aaaa4fc8157542f43c53c0b4488e6d8118dc1dd2dee64b8e0c"
	Dec 17 00:43:35 embed-certs-153232 kubelet[733]: E1217 00:43:35.090883     733 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-9pfwm_kubernetes-dashboard(657354ac-ce6a-4ee6-b133-99fa4afa1442)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9pfwm" podUID="657354ac-ce6a-4ee6-b133-99fa4afa1442"
	Dec 17 00:43:36 embed-certs-153232 kubelet[733]: I1217 00:43:36.758184     733 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-472j2" podStartSLOduration=4.763643562 podStartE2EDuration="11.758162649s" podCreationTimestamp="2025-12-17 00:43:25 +0000 UTC" firstStartedPulling="2025-12-17 00:43:25.908865846 +0000 UTC m=+6.430578638" lastFinishedPulling="2025-12-17 00:43:32.903384932 +0000 UTC m=+13.425097725" observedRunningTime="2025-12-17 00:43:33.643777783 +0000 UTC m=+14.165490594" watchObservedRunningTime="2025-12-17 00:43:36.758162649 +0000 UTC m=+17.279875457"
	Dec 17 00:43:47 embed-certs-153232 kubelet[733]: I1217 00:43:47.565239     733 scope.go:117] "RemoveContainer" containerID="a31ea9167311a0aaaa4fc8157542f43c53c0b4488e6d8118dc1dd2dee64b8e0c"
	Dec 17 00:43:47 embed-certs-153232 kubelet[733]: I1217 00:43:47.677139     733 scope.go:117] "RemoveContainer" containerID="a31ea9167311a0aaaa4fc8157542f43c53c0b4488e6d8118dc1dd2dee64b8e0c"
	Dec 17 00:43:47 embed-certs-153232 kubelet[733]: I1217 00:43:47.677422     733 scope.go:117] "RemoveContainer" containerID="3d2c3aa6013510ed343b70dda91e1024e94192c440d8cb7aa743b80510c1917f"
	Dec 17 00:43:47 embed-certs-153232 kubelet[733]: E1217 00:43:47.677622     733 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-9pfwm_kubernetes-dashboard(657354ac-ce6a-4ee6-b133-99fa4afa1442)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9pfwm" podUID="657354ac-ce6a-4ee6-b133-99fa4afa1442"
	Dec 17 00:43:53 embed-certs-153232 kubelet[733]: I1217 00:43:53.697207     733 scope.go:117] "RemoveContainer" containerID="9e12fba8024abfa61f00f5fe053cd5d50fccf8f0b0cd949bcff836ef6212ea59"
	Dec 17 00:43:55 embed-certs-153232 kubelet[733]: I1217 00:43:55.091558     733 scope.go:117] "RemoveContainer" containerID="3d2c3aa6013510ed343b70dda91e1024e94192c440d8cb7aa743b80510c1917f"
	Dec 17 00:43:55 embed-certs-153232 kubelet[733]: E1217 00:43:55.091784     733 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-9pfwm_kubernetes-dashboard(657354ac-ce6a-4ee6-b133-99fa4afa1442)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9pfwm" podUID="657354ac-ce6a-4ee6-b133-99fa4afa1442"
	Dec 17 00:44:07 embed-certs-153232 kubelet[733]: I1217 00:44:07.564738     733 scope.go:117] "RemoveContainer" containerID="3d2c3aa6013510ed343b70dda91e1024e94192c440d8cb7aa743b80510c1917f"
	Dec 17 00:44:07 embed-certs-153232 kubelet[733]: E1217 00:44:07.564971     733 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-9pfwm_kubernetes-dashboard(657354ac-ce6a-4ee6-b133-99fa4afa1442)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9pfwm" podUID="657354ac-ce6a-4ee6-b133-99fa4afa1442"
	Dec 17 00:44:12 embed-certs-153232 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 17 00:44:12 embed-certs-153232 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 17 00:44:12 embed-certs-153232 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:44:12 embed-certs-153232 systemd[1]: kubelet.service: Consumed 1.632s CPU time.
	
	
	==> kubernetes-dashboard [d4b900c582c6abc6c4d8c623e5365ca20e2f76c0980168c5652e9f834c43de48] <==
	2025/12/17 00:43:33 Using namespace: kubernetes-dashboard
	2025/12/17 00:43:33 Using in-cluster config to connect to apiserver
	2025/12/17 00:43:33 Using secret token for csrf signing
	2025/12/17 00:43:33 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/17 00:43:33 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/17 00:43:33 Successful initial request to the apiserver, version: v1.34.2
	2025/12/17 00:43:33 Generating JWE encryption key
	2025/12/17 00:43:33 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/17 00:43:33 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/17 00:43:33 Initializing JWE encryption key from synchronized object
	2025/12/17 00:43:33 Creating in-cluster Sidecar client
	2025/12/17 00:43:33 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/17 00:43:33 Serving insecurely on HTTP port: 9090
	2025/12/17 00:44:03 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/17 00:43:33 Starting overwatch
	
	
	==> storage-provisioner [4aa28ef7b86e0ac2c8860e0731143889f5585d08d1c8e3092e5fdbae502d7645] <==
	I1217 00:43:53.772586       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1217 00:43:53.781285       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1217 00:43:53.781335       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1217 00:43:53.783356       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:43:57.240074       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:44:01.500216       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:44:05.098541       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:44:08.152030       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:44:11.174316       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:44:11.179161       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1217 00:44:11.179295       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1217 00:44:11.179358       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"edc5b1f6-fb4f-4962-9502-23926c96ec27", APIVersion:"v1", ResourceVersion:"635", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-153232_1e248d9a-57f1-4723-a4a0-951eb4ec5313 became leader
	I1217 00:44:11.179412       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-153232_1e248d9a-57f1-4723-a4a0-951eb4ec5313!
	W1217 00:44:11.181885       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:44:11.185211       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1217 00:44:11.279589       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-153232_1e248d9a-57f1-4723-a4a0-951eb4ec5313!
	W1217 00:44:13.187766       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:44:13.191558       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:44:15.195762       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:44:15.202011       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [9e12fba8024abfa61f00f5fe053cd5d50fccf8f0b0cd949bcff836ef6212ea59] <==
	I1217 00:43:22.947627       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1217 00:43:52.950438       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-153232 -n embed-certs-153232
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-153232 -n embed-certs-153232: exit status 2 (324.432124ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context embed-certs-153232 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (5.43s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (5.42s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-414413 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p default-k8s-diff-port-414413 --alsologtostderr -v=1: exit status 80 (1.71779628s)

                                                
                                                
-- stdout --
	* Pausing node default-k8s-diff-port-414413 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 00:44:27.931846  321448 out.go:360] Setting OutFile to fd 1 ...
	I1217 00:44:27.932193  321448 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:44:27.932207  321448 out.go:374] Setting ErrFile to fd 2...
	I1217 00:44:27.932214  321448 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:44:27.932556  321448 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-12816/.minikube/bin
	I1217 00:44:27.932878  321448 out.go:368] Setting JSON to false
	I1217 00:44:27.932902  321448 mustload.go:66] Loading cluster: default-k8s-diff-port-414413
	I1217 00:44:27.933414  321448 config.go:182] Loaded profile config "default-k8s-diff-port-414413": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:44:27.934016  321448 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-414413 --format={{.State.Status}}
	I1217 00:44:27.954308  321448 host.go:66] Checking if "default-k8s-diff-port-414413" exists ...
	I1217 00:44:27.954578  321448 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:44:28.012520  321448 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:79 OomKillDisable:false NGoroutines:85 SystemTime:2025-12-17 00:44:28.00258684 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 00:44:28.013225  321448 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1765846775-22141/minikube-v1.37.0-1765846775-22141-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1765846775-22141-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:default-k8s-diff-port-414413 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s
(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1217 00:44:28.014792  321448 out.go:179] * Pausing node default-k8s-diff-port-414413 ... 
	I1217 00:44:28.016322  321448 host.go:66] Checking if "default-k8s-diff-port-414413" exists ...
	I1217 00:44:28.016637  321448 ssh_runner.go:195] Run: systemctl --version
	I1217 00:44:28.016690  321448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-414413
	I1217 00:44:28.037033  321448 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/default-k8s-diff-port-414413/id_rsa Username:docker}
	I1217 00:44:28.132087  321448 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 00:44:28.165696  321448 pause.go:52] kubelet running: true
	I1217 00:44:28.165761  321448 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1217 00:44:28.344590  321448 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1217 00:44:28.344662  321448 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1217 00:44:28.420137  321448 cri.go:89] found id: "29dcc2e0fca01e5acc47fd9e7b42b73755a799de4a843cc7448f1cf3d24c1370"
	I1217 00:44:28.420166  321448 cri.go:89] found id: "b82b299f948d717658d6977755447250d679af51d1b6071b37f467e8810d95bf"
	I1217 00:44:28.420173  321448 cri.go:89] found id: "275d3d03f2346fc781571f2f61dc5d70168875e4ee6e2e5783f3893a19e24e67"
	I1217 00:44:28.420179  321448 cri.go:89] found id: "bbca296c30f3d3f0cca453021716cd6a26728333310fb6dfdeb35c44a6832375"
	I1217 00:44:28.420185  321448 cri.go:89] found id: "1b6d441ac73c0906999e2b074e7fb8e741006a82fa543a72336ae290aef62cf4"
	I1217 00:44:28.420195  321448 cri.go:89] found id: "2a7b291de067a5044f406eaa0104c52261424e3730e6c2e4d38864b41943eddd"
	I1217 00:44:28.420208  321448 cri.go:89] found id: "4dcc77a289bba808ececc2d4f0efa70e966e843b2057d6de5ad0054d0be435c8"
	I1217 00:44:28.420218  321448 cri.go:89] found id: "ba3df04c6b3feaf2f234a1a9b098c1269d844cdbaf6531304d6ddd40b10820d5"
	I1217 00:44:28.420230  321448 cri.go:89] found id: "eecadcae34c3698337c66c6d6dbab2066993e3216b64d194344407552bc449b5"
	I1217 00:44:28.420243  321448 cri.go:89] found id: "d2a2a6abdc96c42c27ab0c3e8b49c402a202b687de42012d4e22faf078a53746"
	I1217 00:44:28.420250  321448 cri.go:89] found id: "554f0df62e1c2a39c6dcfbc1c0ee65889b3ab428dc9ed21a3ca89b258910f564"
	I1217 00:44:28.420253  321448 cri.go:89] found id: ""
	I1217 00:44:28.420295  321448 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 00:44:28.431421  321448 retry.go:31] will retry after 210.627676ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T00:44:28Z" level=error msg="open /run/runc: no such file or directory"
	I1217 00:44:28.642910  321448 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 00:44:28.656865  321448 pause.go:52] kubelet running: false
	I1217 00:44:28.656926  321448 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1217 00:44:28.801089  321448 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1217 00:44:28.801184  321448 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1217 00:44:28.873285  321448 cri.go:89] found id: "29dcc2e0fca01e5acc47fd9e7b42b73755a799de4a843cc7448f1cf3d24c1370"
	I1217 00:44:28.873306  321448 cri.go:89] found id: "b82b299f948d717658d6977755447250d679af51d1b6071b37f467e8810d95bf"
	I1217 00:44:28.873312  321448 cri.go:89] found id: "275d3d03f2346fc781571f2f61dc5d70168875e4ee6e2e5783f3893a19e24e67"
	I1217 00:44:28.873317  321448 cri.go:89] found id: "bbca296c30f3d3f0cca453021716cd6a26728333310fb6dfdeb35c44a6832375"
	I1217 00:44:28.873320  321448 cri.go:89] found id: "1b6d441ac73c0906999e2b074e7fb8e741006a82fa543a72336ae290aef62cf4"
	I1217 00:44:28.873324  321448 cri.go:89] found id: "2a7b291de067a5044f406eaa0104c52261424e3730e6c2e4d38864b41943eddd"
	I1217 00:44:28.873326  321448 cri.go:89] found id: "4dcc77a289bba808ececc2d4f0efa70e966e843b2057d6de5ad0054d0be435c8"
	I1217 00:44:28.873329  321448 cri.go:89] found id: "ba3df04c6b3feaf2f234a1a9b098c1269d844cdbaf6531304d6ddd40b10820d5"
	I1217 00:44:28.873332  321448 cri.go:89] found id: "eecadcae34c3698337c66c6d6dbab2066993e3216b64d194344407552bc449b5"
	I1217 00:44:28.873344  321448 cri.go:89] found id: "d2a2a6abdc96c42c27ab0c3e8b49c402a202b687de42012d4e22faf078a53746"
	I1217 00:44:28.873350  321448 cri.go:89] found id: "554f0df62e1c2a39c6dcfbc1c0ee65889b3ab428dc9ed21a3ca89b258910f564"
	I1217 00:44:28.873352  321448 cri.go:89] found id: ""
	I1217 00:44:28.873399  321448 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 00:44:28.884808  321448 retry.go:31] will retry after 443.82898ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T00:44:28Z" level=error msg="open /run/runc: no such file or directory"
	I1217 00:44:29.329488  321448 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 00:44:29.343019  321448 pause.go:52] kubelet running: false
	I1217 00:44:29.343083  321448 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1217 00:44:29.495640  321448 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1217 00:44:29.495730  321448 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1217 00:44:29.562596  321448 cri.go:89] found id: "29dcc2e0fca01e5acc47fd9e7b42b73755a799de4a843cc7448f1cf3d24c1370"
	I1217 00:44:29.562618  321448 cri.go:89] found id: "b82b299f948d717658d6977755447250d679af51d1b6071b37f467e8810d95bf"
	I1217 00:44:29.562623  321448 cri.go:89] found id: "275d3d03f2346fc781571f2f61dc5d70168875e4ee6e2e5783f3893a19e24e67"
	I1217 00:44:29.562628  321448 cri.go:89] found id: "bbca296c30f3d3f0cca453021716cd6a26728333310fb6dfdeb35c44a6832375"
	I1217 00:44:29.562643  321448 cri.go:89] found id: "1b6d441ac73c0906999e2b074e7fb8e741006a82fa543a72336ae290aef62cf4"
	I1217 00:44:29.562649  321448 cri.go:89] found id: "2a7b291de067a5044f406eaa0104c52261424e3730e6c2e4d38864b41943eddd"
	I1217 00:44:29.562653  321448 cri.go:89] found id: "4dcc77a289bba808ececc2d4f0efa70e966e843b2057d6de5ad0054d0be435c8"
	I1217 00:44:29.562658  321448 cri.go:89] found id: "ba3df04c6b3feaf2f234a1a9b098c1269d844cdbaf6531304d6ddd40b10820d5"
	I1217 00:44:29.562663  321448 cri.go:89] found id: "eecadcae34c3698337c66c6d6dbab2066993e3216b64d194344407552bc449b5"
	I1217 00:44:29.562675  321448 cri.go:89] found id: "d2a2a6abdc96c42c27ab0c3e8b49c402a202b687de42012d4e22faf078a53746"
	I1217 00:44:29.562684  321448 cri.go:89] found id: "554f0df62e1c2a39c6dcfbc1c0ee65889b3ab428dc9ed21a3ca89b258910f564"
	I1217 00:44:29.562689  321448 cri.go:89] found id: ""
	I1217 00:44:29.562731  321448 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 00:44:29.576041  321448 out.go:203] 
	W1217 00:44:29.577242  321448 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T00:44:29Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T00:44:29Z" level=error msg="open /run/runc: no such file or directory"
	
	W1217 00:44:29.577262  321448 out.go:285] * 
	* 
	W1217 00:44:29.581458  321448 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 00:44:29.582429  321448 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p default-k8s-diff-port-414413 --alsologtostderr -v=1 failed: exit status 80
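Note on the exit status 80 above: the pause flow lists the running kube-system/kubernetes-dashboard containers via crictl, then tries `sudo runc list -f json` three times, and each attempt fails with "open /run/runc: no such file or directory", after which minikube exits with GUEST_PAUSE. A minimal manual re-check of that mismatch, assuming the default-k8s-diff-port-414413 node is still running, is sketched below; the commands simply replay the probes from the stderr log above and do not assert anything further about the root cause.

	# CRI view that succeeds in the log above (containers are listed)
	out/minikube-linux-amd64 -p default-k8s-diff-port-414413 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	# runc query the pause flow retries; on this node it fails the same way as in the log
	out/minikube-linux-amd64 -p default-k8s-diff-port-414413 ssh -- sudo runc list -f json
	# confirm the state directory runc expects is absent
	out/minikube-linux-amd64 -p default-k8s-diff-port-414413 ssh -- ls /run/runc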
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect default-k8s-diff-port-414413
helpers_test.go:244: (dbg) docker inspect default-k8s-diff-port-414413:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "32e520445c9ef469b69a7cfa94fa07b2c047bc072eab1f9bd789716ea62b2b17",
	        "Created": "2025-12-17T00:42:18.411894947Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 306548,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T00:43:24.886655377Z",
	            "FinishedAt": "2025-12-17T00:43:23.963069349Z"
	        },
	        "Image": "sha256:2e44aac5cae5bb6b68b129ed5c85e80a5c1aac07706537d46ba12326f0e5c3cf",
	        "ResolvConfPath": "/var/lib/docker/containers/32e520445c9ef469b69a7cfa94fa07b2c047bc072eab1f9bd789716ea62b2b17/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/32e520445c9ef469b69a7cfa94fa07b2c047bc072eab1f9bd789716ea62b2b17/hostname",
	        "HostsPath": "/var/lib/docker/containers/32e520445c9ef469b69a7cfa94fa07b2c047bc072eab1f9bd789716ea62b2b17/hosts",
	        "LogPath": "/var/lib/docker/containers/32e520445c9ef469b69a7cfa94fa07b2c047bc072eab1f9bd789716ea62b2b17/32e520445c9ef469b69a7cfa94fa07b2c047bc072eab1f9bd789716ea62b2b17-json.log",
	        "Name": "/default-k8s-diff-port-414413",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-414413:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-414413",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "32e520445c9ef469b69a7cfa94fa07b2c047bc072eab1f9bd789716ea62b2b17",
	                "LowerDir": "/var/lib/docker/overlay2/f63ae4354f75340680ea6735a9f2526da1a4c2e021a8a8e10a3b649ecbc014e0-init/diff:/var/lib/docker/overlay2/594b812fd6d8db89dab322ea9e00d43dd555e9709fb5e6953e3873cce717392c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f63ae4354f75340680ea6735a9f2526da1a4c2e021a8a8e10a3b649ecbc014e0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f63ae4354f75340680ea6735a9f2526da1a4c2e021a8a8e10a3b649ecbc014e0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f63ae4354f75340680ea6735a9f2526da1a4c2e021a8a8e10a3b649ecbc014e0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-414413",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-414413/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-414413",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-414413",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-414413",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "09c00b839672fde06dfd4934d198cef1659e7eb358cb0a9f8913ae9ff66d80c2",
	            "SandboxKey": "/var/run/docker/netns/09c00b839672",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33103"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33104"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33107"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33105"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33106"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-414413": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a57026acfc125e7890d2c5444987c6f9f2a024f5d99a4bf5d6821c92ba08cc07",
	                    "EndpointID": "044fbcb88ff5df9641dce86e342b82fd42526e568ac0ce22405f7b85d5d3ba97",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "32:be:ad:82:22:b4",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-414413",
	                        "32e520445c9e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-414413 -n default-k8s-diff-port-414413
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-414413 -n default-k8s-diff-port-414413: exit status 2 (316.97817ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-414413 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-414413 logs -n 25: (1.051053517s)
helpers_test.go:261: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬───────
──────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼───────
──────────────┤
	│ stop    │ -p newest-cni-653717 --alsologtostderr -v=3                                                                                                                                                                                                          │ newest-cni-653717            │ jenkins │ v1.37.0 │ 17 Dec 25 00:43 UTC │ 17 Dec 25 00:43 UTC │
	│ stop    │ -p default-k8s-diff-port-414413 --alsologtostderr -v=3                                                                                                                                                                                               │ default-k8s-diff-port-414413 │ jenkins │ v1.37.0 │ 17 Dec 25 00:43 UTC │ 17 Dec 25 00:43 UTC │
	│ addons  │ enable dashboard -p newest-cni-653717 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ newest-cni-653717            │ jenkins │ v1.37.0 │ 17 Dec 25 00:43 UTC │ 17 Dec 25 00:43 UTC │
	│ start   │ -p newest-cni-653717 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-653717            │ jenkins │ v1.37.0 │ 17 Dec 25 00:43 UTC │ 17 Dec 25 00:43 UTC │
	│ addons  │ enable dashboard -p embed-certs-153232 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ embed-certs-153232           │ jenkins │ v1.37.0 │ 17 Dec 25 00:43 UTC │ 17 Dec 25 00:43 UTC │
	│ start   │ -p embed-certs-153232 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-153232           │ jenkins │ v1.37.0 │ 17 Dec 25 00:43 UTC │ 17 Dec 25 00:44 UTC │
	│ image   │ newest-cni-653717 image list --format=json                                                                                                                                                                                                           │ newest-cni-653717            │ jenkins │ v1.37.0 │ 17 Dec 25 00:43 UTC │ 17 Dec 25 00:43 UTC │
	│ pause   │ -p newest-cni-653717 --alsologtostderr -v=1                                                                                                                                                                                                          │ newest-cni-653717            │ jenkins │ v1.37.0 │ 17 Dec 25 00:43 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-414413 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                              │ default-k8s-diff-port-414413 │ jenkins │ v1.37.0 │ 17 Dec 25 00:43 UTC │ 17 Dec 25 00:43 UTC │
	│ start   │ -p default-k8s-diff-port-414413 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-414413 │ jenkins │ v1.37.0 │ 17 Dec 25 00:43 UTC │ 17 Dec 25 00:44 UTC │
	│ delete  │ -p newest-cni-653717                                                                                                                                                                                                                                 │ newest-cni-653717            │ jenkins │ v1.37.0 │ 17 Dec 25 00:43 UTC │ 17 Dec 25 00:43 UTC │
	│ delete  │ -p newest-cni-653717                                                                                                                                                                                                                                 │ newest-cni-653717            │ jenkins │ v1.37.0 │ 17 Dec 25 00:43 UTC │ 17 Dec 25 00:43 UTC │
	│ start   │ -p auto-802249 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                              │ auto-802249                  │ jenkins │ v1.37.0 │ 17 Dec 25 00:43 UTC │                     │
	│ image   │ no-preload-864613 image list --format=json                                                                                                                                                                                                           │ no-preload-864613            │ jenkins │ v1.37.0 │ 17 Dec 25 00:43 UTC │ 17 Dec 25 00:43 UTC │
	│ pause   │ -p no-preload-864613 --alsologtostderr -v=1                                                                                                                                                                                                          │ no-preload-864613            │ jenkins │ v1.37.0 │ 17 Dec 25 00:43 UTC │                     │
	│ delete  │ -p no-preload-864613                                                                                                                                                                                                                                 │ no-preload-864613            │ jenkins │ v1.37.0 │ 17 Dec 25 00:43 UTC │ 17 Dec 25 00:43 UTC │
	│ delete  │ -p no-preload-864613                                                                                                                                                                                                                                 │ no-preload-864613            │ jenkins │ v1.37.0 │ 17 Dec 25 00:43 UTC │ 17 Dec 25 00:43 UTC │
	│ start   │ -p kindnet-802249 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio                                                                                                             │ kindnet-802249               │ jenkins │ v1.37.0 │ 17 Dec 25 00:43 UTC │ 17 Dec 25 00:44 UTC │
	│ image   │ embed-certs-153232 image list --format=json                                                                                                                                                                                                          │ embed-certs-153232           │ jenkins │ v1.37.0 │ 17 Dec 25 00:44 UTC │ 17 Dec 25 00:44 UTC │
	│ pause   │ -p embed-certs-153232 --alsologtostderr -v=1                                                                                                                                                                                                         │ embed-certs-153232           │ jenkins │ v1.37.0 │ 17 Dec 25 00:44 UTC │                     │
	│ delete  │ -p embed-certs-153232                                                                                                                                                                                                                                │ embed-certs-153232           │ jenkins │ v1.37.0 │ 17 Dec 25 00:44 UTC │ 17 Dec 25 00:44 UTC │
	│ delete  │ -p embed-certs-153232                                                                                                                                                                                                                                │ embed-certs-153232           │ jenkins │ v1.37.0 │ 17 Dec 25 00:44 UTC │ 17 Dec 25 00:44 UTC │
	│ start   │ -p calico-802249 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio                                                                                                               │ calico-802249                │ jenkins │ v1.37.0 │ 17 Dec 25 00:44 UTC │                     │
	│ image   │ default-k8s-diff-port-414413 image list --format=json                                                                                                                                                                                                │ default-k8s-diff-port-414413 │ jenkins │ v1.37.0 │ 17 Dec 25 00:44 UTC │ 17 Dec 25 00:44 UTC │
	│ pause   │ -p default-k8s-diff-port-414413 --alsologtostderr -v=1                                                                                                                                                                                               │ default-k8s-diff-port-414413 │ jenkins │ v1.37.0 │ 17 Dec 25 00:44 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴───────
──────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 00:44:20
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 00:44:20.394476  319582 out.go:360] Setting OutFile to fd 1 ...
	I1217 00:44:20.394713  319582 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:44:20.394721  319582 out.go:374] Setting ErrFile to fd 2...
	I1217 00:44:20.394726  319582 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:44:20.394900  319582 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-12816/.minikube/bin
	I1217 00:44:20.395351  319582 out.go:368] Setting JSON to false
	I1217 00:44:20.396505  319582 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":5210,"bootTime":1765927050,"procs":321,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 00:44:20.396553  319582 start.go:143] virtualization: kvm guest
	I1217 00:44:20.398507  319582 out.go:179] * [calico-802249] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 00:44:20.399696  319582 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 00:44:20.399711  319582 notify.go:221] Checking for updates...
	I1217 00:44:20.402588  319582 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 00:44:20.404189  319582 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22168-12816/kubeconfig
	I1217 00:44:20.405312  319582 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-12816/.minikube
	I1217 00:44:20.406298  319582 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 00:44:20.407348  319582 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 00:44:20.408721  319582 config.go:182] Loaded profile config "auto-802249": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:44:20.408841  319582 config.go:182] Loaded profile config "default-k8s-diff-port-414413": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:44:20.408941  319582 config.go:182] Loaded profile config "kindnet-802249": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:44:20.409097  319582 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 00:44:20.432664  319582 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1217 00:44:20.432818  319582 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:44:20.488291  319582 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:74 SystemTime:2025-12-17 00:44:20.478115459 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 00:44:20.488404  319582 docker.go:319] overlay module found
	I1217 00:44:20.489986  319582 out.go:179] * Using the docker driver based on user configuration
	I1217 00:44:20.491187  319582 start.go:309] selected driver: docker
	I1217 00:44:20.491205  319582 start.go:927] validating driver "docker" against <nil>
	I1217 00:44:20.491219  319582 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 00:44:20.492102  319582 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:44:20.549295  319582 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:74 SystemTime:2025-12-17 00:44:20.539290454 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 00:44:20.549435  319582 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1217 00:44:20.549704  319582 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 00:44:20.551178  319582 out.go:179] * Using Docker driver with root privileges
	I1217 00:44:20.552397  319582 cni.go:84] Creating CNI manager for "calico"
	I1217 00:44:20.552420  319582 start_flags.go:336] Found "Calico" CNI - setting NetworkPlugin=cni
	I1217 00:44:20.552502  319582 start.go:353] cluster config:
	{Name:calico-802249 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:calico-802249 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:44:20.553784  319582 out.go:179] * Starting "calico-802249" primary control-plane node in "calico-802249" cluster
	I1217 00:44:20.554852  319582 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 00:44:20.556015  319582 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 00:44:20.557065  319582 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1217 00:44:20.557097  319582 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22168-12816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1217 00:44:20.557108  319582 cache.go:65] Caching tarball of preloaded images
	I1217 00:44:20.557164  319582 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 00:44:20.557207  319582 preload.go:238] Found /home/jenkins/minikube-integration/22168-12816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1217 00:44:20.557222  319582 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1217 00:44:20.557323  319582 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/calico-802249/config.json ...
	I1217 00:44:20.557355  319582 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/calico-802249/config.json: {Name:mk6d81e1e5e976b995c9f4a77bc824df3c821922 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:44:20.577810  319582 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 00:44:20.577826  319582 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 00:44:20.577843  319582 cache.go:243] Successfully downloaded all kic artifacts
	I1217 00:44:20.577880  319582 start.go:360] acquireMachinesLock for calico-802249: {Name:mk66f9af13fbe38f7686efc64dcebf8f2643e35c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 00:44:20.577967  319582 start.go:364] duration metric: took 73.471µs to acquireMachinesLock for "calico-802249"
	I1217 00:44:20.578007  319582 start.go:93] Provisioning new machine with config: &{Name:calico-802249 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:calico-802249 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 00:44:20.578070  319582 start.go:125] createHost starting for "" (driver="docker")
	W1217 00:44:19.532393  307526 node_ready.go:57] node "auto-802249" has "Ready":"False" status (will retry)
	W1217 00:44:21.533119  307526 node_ready.go:57] node "auto-802249" has "Ready":"False" status (will retry)
	W1217 00:44:19.010486  313838 node_ready.go:57] node "kindnet-802249" has "Ready":"False" status (will retry)
	W1217 00:44:21.510710  313838 node_ready.go:57] node "kindnet-802249" has "Ready":"False" status (will retry)
	I1217 00:44:23.010272  313838 node_ready.go:49] node "kindnet-802249" is "Ready"
	I1217 00:44:23.010301  313838 node_ready.go:38] duration metric: took 10.503419159s for node "kindnet-802249" to be "Ready" ...
	I1217 00:44:23.010316  313838 api_server.go:52] waiting for apiserver process to appear ...
	I1217 00:44:23.010366  313838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:44:23.023570  313838 api_server.go:72] duration metric: took 10.850225274s to wait for apiserver process to appear ...
	I1217 00:44:23.023598  313838 api_server.go:88] waiting for apiserver healthz status ...
	I1217 00:44:23.023617  313838 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1217 00:44:23.029047  313838 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1217 00:44:23.030072  313838 api_server.go:141] control plane version: v1.34.2
	I1217 00:44:23.030101  313838 api_server.go:131] duration metric: took 6.495577ms to wait for apiserver health ...
	I1217 00:44:23.030111  313838 system_pods.go:43] waiting for kube-system pods to appear ...
	I1217 00:44:23.035470  313838 system_pods.go:59] 8 kube-system pods found
	I1217 00:44:23.035510  313838 system_pods.go:61] "coredns-66bc5c9577-7p575" [06ee85a9-892e-40f5-adf9-42882599366f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 00:44:23.035524  313838 system_pods.go:61] "etcd-kindnet-802249" [a9cfcb70-d987-439c-8579-c1111de52883] Running
	I1217 00:44:23.035539  313838 system_pods.go:61] "kindnet-c6fx4" [fc92c1bc-53fb-49d3-92c2-0c698d1961fb] Running
	I1217 00:44:23.035549  313838 system_pods.go:61] "kube-apiserver-kindnet-802249" [7714e874-38d7-44b5-91f6-28244fbd6e7b] Running
	I1217 00:44:23.035558  313838 system_pods.go:61] "kube-controller-manager-kindnet-802249" [8367fc83-e848-414e-868e-26e124b5399c] Running
	I1217 00:44:23.035569  313838 system_pods.go:61] "kube-proxy-zgfw2" [f3bb7dc0-0c90-4a78-a297-06910340ef6d] Running
	I1217 00:44:23.035579  313838 system_pods.go:61] "kube-scheduler-kindnet-802249" [e873311a-3af1-4901-9080-7e37437a542a] Running
	I1217 00:44:23.035592  313838 system_pods.go:61] "storage-provisioner" [8075127e-89ff-4d27-b16d-5d615bb18953] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 00:44:23.035604  313838 system_pods.go:74] duration metric: took 5.486672ms to wait for pod list to return data ...
	I1217 00:44:23.035629  313838 default_sa.go:34] waiting for default service account to be created ...
	I1217 00:44:23.038123  313838 default_sa.go:45] found service account: "default"
	I1217 00:44:23.038148  313838 default_sa.go:55] duration metric: took 2.505084ms for default service account to be created ...
	I1217 00:44:23.038157  313838 system_pods.go:116] waiting for k8s-apps to be running ...
	I1217 00:44:23.040441  313838 system_pods.go:86] 8 kube-system pods found
	I1217 00:44:23.040474  313838 system_pods.go:89] "coredns-66bc5c9577-7p575" [06ee85a9-892e-40f5-adf9-42882599366f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 00:44:23.040481  313838 system_pods.go:89] "etcd-kindnet-802249" [a9cfcb70-d987-439c-8579-c1111de52883] Running
	I1217 00:44:23.040488  313838 system_pods.go:89] "kindnet-c6fx4" [fc92c1bc-53fb-49d3-92c2-0c698d1961fb] Running
	I1217 00:44:23.040494  313838 system_pods.go:89] "kube-apiserver-kindnet-802249" [7714e874-38d7-44b5-91f6-28244fbd6e7b] Running
	I1217 00:44:23.040500  313838 system_pods.go:89] "kube-controller-manager-kindnet-802249" [8367fc83-e848-414e-868e-26e124b5399c] Running
	I1217 00:44:23.040507  313838 system_pods.go:89] "kube-proxy-zgfw2" [f3bb7dc0-0c90-4a78-a297-06910340ef6d] Running
	I1217 00:44:23.040513  313838 system_pods.go:89] "kube-scheduler-kindnet-802249" [e873311a-3af1-4901-9080-7e37437a542a] Running
	I1217 00:44:23.040520  313838 system_pods.go:89] "storage-provisioner" [8075127e-89ff-4d27-b16d-5d615bb18953] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 00:44:23.040546  313838 retry.go:31] will retry after 294.943902ms: missing components: kube-dns
	I1217 00:44:23.339316  313838 system_pods.go:86] 8 kube-system pods found
	I1217 00:44:23.339347  313838 system_pods.go:89] "coredns-66bc5c9577-7p575" [06ee85a9-892e-40f5-adf9-42882599366f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 00:44:23.339353  313838 system_pods.go:89] "etcd-kindnet-802249" [a9cfcb70-d987-439c-8579-c1111de52883] Running
	I1217 00:44:23.339359  313838 system_pods.go:89] "kindnet-c6fx4" [fc92c1bc-53fb-49d3-92c2-0c698d1961fb] Running
	I1217 00:44:23.339365  313838 system_pods.go:89] "kube-apiserver-kindnet-802249" [7714e874-38d7-44b5-91f6-28244fbd6e7b] Running
	I1217 00:44:23.339369  313838 system_pods.go:89] "kube-controller-manager-kindnet-802249" [8367fc83-e848-414e-868e-26e124b5399c] Running
	I1217 00:44:23.339373  313838 system_pods.go:89] "kube-proxy-zgfw2" [f3bb7dc0-0c90-4a78-a297-06910340ef6d] Running
	I1217 00:44:23.339376  313838 system_pods.go:89] "kube-scheduler-kindnet-802249" [e873311a-3af1-4901-9080-7e37437a542a] Running
	I1217 00:44:23.339381  313838 system_pods.go:89] "storage-provisioner" [8075127e-89ff-4d27-b16d-5d615bb18953] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 00:44:23.339397  313838 retry.go:31] will retry after 384.557211ms: missing components: kube-dns
	I1217 00:44:23.728486  313838 system_pods.go:86] 8 kube-system pods found
	I1217 00:44:23.728524  313838 system_pods.go:89] "coredns-66bc5c9577-7p575" [06ee85a9-892e-40f5-adf9-42882599366f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 00:44:23.728532  313838 system_pods.go:89] "etcd-kindnet-802249" [a9cfcb70-d987-439c-8579-c1111de52883] Running
	I1217 00:44:23.728541  313838 system_pods.go:89] "kindnet-c6fx4" [fc92c1bc-53fb-49d3-92c2-0c698d1961fb] Running
	I1217 00:44:23.728547  313838 system_pods.go:89] "kube-apiserver-kindnet-802249" [7714e874-38d7-44b5-91f6-28244fbd6e7b] Running
	I1217 00:44:23.728553  313838 system_pods.go:89] "kube-controller-manager-kindnet-802249" [8367fc83-e848-414e-868e-26e124b5399c] Running
	I1217 00:44:23.728563  313838 system_pods.go:89] "kube-proxy-zgfw2" [f3bb7dc0-0c90-4a78-a297-06910340ef6d] Running
	I1217 00:44:23.728568  313838 system_pods.go:89] "kube-scheduler-kindnet-802249" [e873311a-3af1-4901-9080-7e37437a542a] Running
	I1217 00:44:23.728580  313838 system_pods.go:89] "storage-provisioner" [8075127e-89ff-4d27-b16d-5d615bb18953] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 00:44:23.728602  313838 retry.go:31] will retry after 335.761178ms: missing components: kube-dns
	I1217 00:44:20.580256  319582 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1217 00:44:20.580434  319582 start.go:159] libmachine.API.Create for "calico-802249" (driver="docker")
	I1217 00:44:20.580461  319582 client.go:173] LocalClient.Create starting
	I1217 00:44:20.580542  319582 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem
	I1217 00:44:20.580571  319582 main.go:143] libmachine: Decoding PEM data...
	I1217 00:44:20.580591  319582 main.go:143] libmachine: Parsing certificate...
	I1217 00:44:20.580633  319582 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22168-12816/.minikube/certs/cert.pem
	I1217 00:44:20.580655  319582 main.go:143] libmachine: Decoding PEM data...
	I1217 00:44:20.580667  319582 main.go:143] libmachine: Parsing certificate...
	I1217 00:44:20.580985  319582 cli_runner.go:164] Run: docker network inspect calico-802249 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1217 00:44:20.596841  319582 cli_runner.go:211] docker network inspect calico-802249 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1217 00:44:20.596893  319582 network_create.go:284] running [docker network inspect calico-802249] to gather additional debugging logs...
	I1217 00:44:20.596907  319582 cli_runner.go:164] Run: docker network inspect calico-802249
	W1217 00:44:20.613275  319582 cli_runner.go:211] docker network inspect calico-802249 returned with exit code 1
	I1217 00:44:20.613310  319582 network_create.go:287] error running [docker network inspect calico-802249]: docker network inspect calico-802249: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network calico-802249 not found
	I1217 00:44:20.613323  319582 network_create.go:289] output of [docker network inspect calico-802249]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network calico-802249 not found
	
	** /stderr **
	I1217 00:44:20.613402  319582 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 00:44:20.630393  319582 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-6ffd1d738f01 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:6e:3d:52:75:47:82} reservation:<nil>}
	I1217 00:44:20.631199  319582 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-280edd437675 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:5e:ae:02:b5:f9:a6} reservation:<nil>}
	I1217 00:44:20.631966  319582 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-9f28d049043c IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:fa:3f:8e:e9:44:56} reservation:<nil>}
	I1217 00:44:20.632466  319582 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-a57026acfc12 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:aa:e6:32:39:49:3b} reservation:<nil>}
	I1217 00:44:20.633223  319582 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f44be0}
	I1217 00:44:20.633246  319582 network_create.go:124] attempt to create docker network calico-802249 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1217 00:44:20.633292  319582 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-802249 calico-802249
	I1217 00:44:20.681033  319582 network_create.go:108] docker network calico-802249 192.168.85.0/24 created
	I1217 00:44:20.681059  319582 kic.go:121] calculated static IP "192.168.85.2" for the "calico-802249" container
	I1217 00:44:20.681121  319582 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1217 00:44:20.697843  319582 cli_runner.go:164] Run: docker volume create calico-802249 --label name.minikube.sigs.k8s.io=calico-802249 --label created_by.minikube.sigs.k8s.io=true
	I1217 00:44:20.716485  319582 oci.go:103] Successfully created a docker volume calico-802249
	I1217 00:44:20.716567  319582 cli_runner.go:164] Run: docker run --rm --name calico-802249-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-802249 --entrypoint /usr/bin/test -v calico-802249:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib
	I1217 00:44:21.154615  319582 oci.go:107] Successfully prepared a docker volume calico-802249
	I1217 00:44:21.154683  319582 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1217 00:44:21.154696  319582 kic.go:194] Starting extracting preloaded images to volume ...
	I1217 00:44:21.154746  319582 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22168-12816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v calico-802249:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir
	I1217 00:44:25.037433  319582 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22168-12816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v calico-802249:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir: (3.882638898s)
	I1217 00:44:25.037469  319582 kic.go:203] duration metric: took 3.88276888s to extract preloaded images to volume ...
	W1217 00:44:25.037592  319582 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1217 00:44:25.037634  319582 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1217 00:44:25.037684  319582 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1217 00:44:25.093494  319582 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-802249 --name calico-802249 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-802249 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-802249 --network calico-802249 --ip 192.168.85.2 --volume calico-802249:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78
	I1217 00:44:25.366011  319582 cli_runner.go:164] Run: docker container inspect calico-802249 --format={{.State.Running}}
	I1217 00:44:25.384191  319582 cli_runner.go:164] Run: docker container inspect calico-802249 --format={{.State.Status}}
	I1217 00:44:24.278427  313838 system_pods.go:86] 8 kube-system pods found
	I1217 00:44:24.278470  313838 system_pods.go:89] "coredns-66bc5c9577-7p575" [06ee85a9-892e-40f5-adf9-42882599366f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 00:44:24.278479  313838 system_pods.go:89] "etcd-kindnet-802249" [a9cfcb70-d987-439c-8579-c1111de52883] Running
	I1217 00:44:24.278486  313838 system_pods.go:89] "kindnet-c6fx4" [fc92c1bc-53fb-49d3-92c2-0c698d1961fb] Running
	I1217 00:44:24.278492  313838 system_pods.go:89] "kube-apiserver-kindnet-802249" [7714e874-38d7-44b5-91f6-28244fbd6e7b] Running
	I1217 00:44:24.278497  313838 system_pods.go:89] "kube-controller-manager-kindnet-802249" [8367fc83-e848-414e-868e-26e124b5399c] Running
	I1217 00:44:24.278503  313838 system_pods.go:89] "kube-proxy-zgfw2" [f3bb7dc0-0c90-4a78-a297-06910340ef6d] Running
	I1217 00:44:24.278509  313838 system_pods.go:89] "kube-scheduler-kindnet-802249" [e873311a-3af1-4901-9080-7e37437a542a] Running
	I1217 00:44:24.278516  313838 system_pods.go:89] "storage-provisioner" [8075127e-89ff-4d27-b16d-5d615bb18953] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 00:44:24.278536  313838 retry.go:31] will retry after 418.539702ms: missing components: kube-dns
	I1217 00:44:24.710200  313838 system_pods.go:86] 8 kube-system pods found
	I1217 00:44:24.710235  313838 system_pods.go:89] "coredns-66bc5c9577-7p575" [06ee85a9-892e-40f5-adf9-42882599366f] Running
	I1217 00:44:24.710243  313838 system_pods.go:89] "etcd-kindnet-802249" [a9cfcb70-d987-439c-8579-c1111de52883] Running
	I1217 00:44:24.710248  313838 system_pods.go:89] "kindnet-c6fx4" [fc92c1bc-53fb-49d3-92c2-0c698d1961fb] Running
	I1217 00:44:24.710253  313838 system_pods.go:89] "kube-apiserver-kindnet-802249" [7714e874-38d7-44b5-91f6-28244fbd6e7b] Running
	I1217 00:44:24.710259  313838 system_pods.go:89] "kube-controller-manager-kindnet-802249" [8367fc83-e848-414e-868e-26e124b5399c] Running
	I1217 00:44:24.710264  313838 system_pods.go:89] "kube-proxy-zgfw2" [f3bb7dc0-0c90-4a78-a297-06910340ef6d] Running
	I1217 00:44:24.710272  313838 system_pods.go:89] "kube-scheduler-kindnet-802249" [e873311a-3af1-4901-9080-7e37437a542a] Running
	I1217 00:44:24.710278  313838 system_pods.go:89] "storage-provisioner" [8075127e-89ff-4d27-b16d-5d615bb18953] Running
	I1217 00:44:24.710288  313838 system_pods.go:126] duration metric: took 1.672124288s to wait for k8s-apps to be running ...
	I1217 00:44:24.710303  313838 system_svc.go:44] waiting for kubelet service to be running ....
	I1217 00:44:24.710355  313838 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 00:44:24.723823  313838 system_svc.go:56] duration metric: took 13.498592ms WaitForService to wait for kubelet
	I1217 00:44:24.723849  313838 kubeadm.go:587] duration metric: took 12.550509636s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 00:44:24.723865  313838 node_conditions.go:102] verifying NodePressure condition ...
	I1217 00:44:24.792708  313838 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1217 00:44:24.792748  313838 node_conditions.go:123] node cpu capacity is 8
	I1217 00:44:24.792767  313838 node_conditions.go:105] duration metric: took 68.896464ms to run NodePressure ...
	I1217 00:44:24.792781  313838 start.go:242] waiting for startup goroutines ...
	I1217 00:44:24.792791  313838 start.go:247] waiting for cluster config update ...
	I1217 00:44:24.792805  313838 start.go:256] writing updated cluster config ...
	I1217 00:44:24.793140  313838 ssh_runner.go:195] Run: rm -f paused
	I1217 00:44:24.797054  313838 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 00:44:24.882662  313838 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-7p575" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:44:24.887094  313838 pod_ready.go:94] pod "coredns-66bc5c9577-7p575" is "Ready"
	I1217 00:44:24.887120  313838 pod_ready.go:86] duration metric: took 4.427201ms for pod "coredns-66bc5c9577-7p575" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:44:24.889066  313838 pod_ready.go:83] waiting for pod "etcd-kindnet-802249" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:44:24.892687  313838 pod_ready.go:94] pod "etcd-kindnet-802249" is "Ready"
	I1217 00:44:24.892704  313838 pod_ready.go:86] duration metric: took 3.618793ms for pod "etcd-kindnet-802249" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:44:24.894439  313838 pod_ready.go:83] waiting for pod "kube-apiserver-kindnet-802249" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:44:24.898224  313838 pod_ready.go:94] pod "kube-apiserver-kindnet-802249" is "Ready"
	I1217 00:44:24.898246  313838 pod_ready.go:86] duration metric: took 3.787233ms for pod "kube-apiserver-kindnet-802249" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:44:24.900076  313838 pod_ready.go:83] waiting for pod "kube-controller-manager-kindnet-802249" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:44:25.200940  313838 pod_ready.go:94] pod "kube-controller-manager-kindnet-802249" is "Ready"
	I1217 00:44:25.200964  313838 pod_ready.go:86] duration metric: took 300.87247ms for pod "kube-controller-manager-kindnet-802249" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:44:25.401734  313838 pod_ready.go:83] waiting for pod "kube-proxy-zgfw2" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:44:25.801456  313838 pod_ready.go:94] pod "kube-proxy-zgfw2" is "Ready"
	I1217 00:44:25.801482  313838 pod_ready.go:86] duration metric: took 399.721184ms for pod "kube-proxy-zgfw2" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:44:26.002017  313838 pod_ready.go:83] waiting for pod "kube-scheduler-kindnet-802249" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:44:26.400925  313838 pod_ready.go:94] pod "kube-scheduler-kindnet-802249" is "Ready"
	I1217 00:44:26.400951  313838 pod_ready.go:86] duration metric: took 398.90712ms for pod "kube-scheduler-kindnet-802249" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:44:26.400963  313838 pod_ready.go:40] duration metric: took 1.603874856s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 00:44:26.446501  313838 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1217 00:44:26.448292  313838 out.go:179] * Done! kubectl is now configured to use "kindnet-802249" cluster and "default" namespace by default
	W1217 00:44:24.033295  307526 node_ready.go:57] node "auto-802249" has "Ready":"False" status (will retry)
	W1217 00:44:26.532629  307526 node_ready.go:57] node "auto-802249" has "Ready":"False" status (will retry)
	I1217 00:44:25.403093  319582 cli_runner.go:164] Run: docker exec calico-802249 stat /var/lib/dpkg/alternatives/iptables
	I1217 00:44:25.452233  319582 oci.go:144] the created container "calico-802249" has a running status.
	I1217 00:44:25.452265  319582 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22168-12816/.minikube/machines/calico-802249/id_rsa...
	I1217 00:44:25.515430  319582 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22168-12816/.minikube/machines/calico-802249/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1217 00:44:25.544415  319582 cli_runner.go:164] Run: docker container inspect calico-802249 --format={{.State.Status}}
	I1217 00:44:25.562228  319582 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1217 00:44:25.562260  319582 kic_runner.go:114] Args: [docker exec --privileged calico-802249 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1217 00:44:25.633589  319582 cli_runner.go:164] Run: docker container inspect calico-802249 --format={{.State.Status}}
	I1217 00:44:25.661963  319582 machine.go:94] provisionDockerMachine start ...
	I1217 00:44:25.662130  319582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-802249
	I1217 00:44:25.687878  319582 main.go:143] libmachine: Using SSH client type: native
	I1217 00:44:25.688164  319582 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1217 00:44:25.688182  319582 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 00:44:25.824334  319582 main.go:143] libmachine: SSH cmd err, output: <nil>: calico-802249
	
	I1217 00:44:25.824361  319582 ubuntu.go:182] provisioning hostname "calico-802249"
	I1217 00:44:25.824424  319582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-802249
	I1217 00:44:25.843884  319582 main.go:143] libmachine: Using SSH client type: native
	I1217 00:44:25.844198  319582 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1217 00:44:25.844222  319582 main.go:143] libmachine: About to run SSH command:
	sudo hostname calico-802249 && echo "calico-802249" | sudo tee /etc/hostname
	I1217 00:44:25.985487  319582 main.go:143] libmachine: SSH cmd err, output: <nil>: calico-802249
	
	I1217 00:44:25.985583  319582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-802249
	I1217 00:44:26.005922  319582 main.go:143] libmachine: Using SSH client type: native
	I1217 00:44:26.006162  319582 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1217 00:44:26.006187  319582 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-802249' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-802249/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-802249' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 00:44:26.134525  319582 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 00:44:26.134550  319582 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22168-12816/.minikube CaCertPath:/home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22168-12816/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22168-12816/.minikube}
	I1217 00:44:26.134588  319582 ubuntu.go:190] setting up certificates
	I1217 00:44:26.134601  319582 provision.go:84] configureAuth start
	I1217 00:44:26.134651  319582 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-802249
	I1217 00:44:26.152007  319582 provision.go:143] copyHostCerts
	I1217 00:44:26.152066  319582 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-12816/.minikube/cert.pem, removing ...
	I1217 00:44:26.152089  319582 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-12816/.minikube/cert.pem
	I1217 00:44:26.152170  319582 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22168-12816/.minikube/cert.pem (1123 bytes)
	I1217 00:44:26.152304  319582 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-12816/.minikube/key.pem, removing ...
	I1217 00:44:26.152320  319582 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-12816/.minikube/key.pem
	I1217 00:44:26.152362  319582 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22168-12816/.minikube/key.pem (1679 bytes)
	I1217 00:44:26.152553  319582 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-12816/.minikube/ca.pem, removing ...
	I1217 00:44:26.152571  319582 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-12816/.minikube/ca.pem
	I1217 00:44:26.152618  319582 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22168-12816/.minikube/ca.pem (1078 bytes)
	I1217 00:44:26.152700  319582 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22168-12816/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca-key.pem org=jenkins.calico-802249 san=[127.0.0.1 192.168.85.2 calico-802249 localhost minikube]
	I1217 00:44:26.227431  319582 provision.go:177] copyRemoteCerts
	I1217 00:44:26.227481  319582 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 00:44:26.227519  319582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-802249
	I1217 00:44:26.246084  319582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/calico-802249/id_rsa Username:docker}
	I1217 00:44:26.338029  319582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1217 00:44:26.356578  319582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1217 00:44:26.374546  319582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1217 00:44:26.391750  319582 provision.go:87] duration metric: took 257.128596ms to configureAuth
	I1217 00:44:26.391776  319582 ubuntu.go:206] setting minikube options for container-runtime
	I1217 00:44:26.391924  319582 config.go:182] Loaded profile config "calico-802249": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:44:26.392031  319582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-802249
	I1217 00:44:26.411886  319582 main.go:143] libmachine: Using SSH client type: native
	I1217 00:44:26.412190  319582 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1217 00:44:26.412211  319582 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 00:44:26.675003  319582 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 00:44:26.675027  319582 machine.go:97] duration metric: took 1.013041442s to provisionDockerMachine
	I1217 00:44:26.675040  319582 client.go:176] duration metric: took 6.094570662s to LocalClient.Create
	I1217 00:44:26.675059  319582 start.go:167] duration metric: took 6.094627011s to libmachine.API.Create "calico-802249"
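The /etc/sysconfig/crio.minikube drop-in written over SSH above injects an --insecure-registry flag for the service CIDR (10.96.0.0/12) and restarts CRI-O. A minimal way to confirm it took effect, assuming this run's profile name:
	# print the option file minikube wrote on the node
	minikube -p calico-802249 ssh -- cat /etc/sysconfig/crio.minikube
	# confirm the runtime came back up after the restart
	minikube -p calico-802249 ssh -- sudo systemctl is-active crio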
	I1217 00:44:26.675067  319582 start.go:293] postStartSetup for "calico-802249" (driver="docker")
	I1217 00:44:26.675075  319582 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 00:44:26.675125  319582 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 00:44:26.675169  319582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-802249
	I1217 00:44:26.693165  319582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/calico-802249/id_rsa Username:docker}
	I1217 00:44:26.789173  319582 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 00:44:26.793016  319582 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 00:44:26.793050  319582 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 00:44:26.793064  319582 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-12816/.minikube/addons for local assets ...
	I1217 00:44:26.793130  319582 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-12816/.minikube/files for local assets ...
	I1217 00:44:26.793220  319582 filesync.go:149] local asset: /home/jenkins/minikube-integration/22168-12816/.minikube/files/etc/ssl/certs/163542.pem -> 163542.pem in /etc/ssl/certs
	I1217 00:44:26.793313  319582 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 00:44:26.801278  319582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/files/etc/ssl/certs/163542.pem --> /etc/ssl/certs/163542.pem (1708 bytes)
	I1217 00:44:26.820968  319582 start.go:296] duration metric: took 145.88909ms for postStartSetup
	I1217 00:44:26.821360  319582 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-802249
	I1217 00:44:26.840363  319582 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/calico-802249/config.json ...
	I1217 00:44:26.840641  319582 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 00:44:26.840681  319582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-802249
	I1217 00:44:26.858885  319582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/calico-802249/id_rsa Username:docker}
	I1217 00:44:26.947689  319582 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 00:44:26.951964  319582 start.go:128] duration metric: took 6.373877572s to createHost
	I1217 00:44:26.951987  319582 start.go:83] releasing machines lock for "calico-802249", held for 6.374010053s
	I1217 00:44:26.952079  319582 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-802249
	I1217 00:44:26.970842  319582 ssh_runner.go:195] Run: cat /version.json
	I1217 00:44:26.970880  319582 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 00:44:26.970888  319582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-802249
	I1217 00:44:26.970951  319582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-802249
	I1217 00:44:26.989508  319582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/calico-802249/id_rsa Username:docker}
	I1217 00:44:26.990719  319582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/calico-802249/id_rsa Username:docker}
	I1217 00:44:27.135646  319582 ssh_runner.go:195] Run: systemctl --version
	I1217 00:44:27.142053  319582 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 00:44:27.174929  319582 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 00:44:27.179565  319582 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 00:44:27.179617  319582 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 00:44:27.204370  319582 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1217 00:44:27.204394  319582 start.go:496] detecting cgroup driver to use...
	I1217 00:44:27.204426  319582 detect.go:190] detected "systemd" cgroup driver on host os
	I1217 00:44:27.204482  319582 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 00:44:27.219820  319582 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 00:44:27.231592  319582 docker.go:218] disabling cri-docker service (if available) ...
	I1217 00:44:27.231646  319582 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 00:44:27.247220  319582 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 00:44:27.264961  319582 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 00:44:27.345536  319582 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 00:44:27.433598  319582 docker.go:234] disabling docker service ...
	I1217 00:44:27.433662  319582 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 00:44:27.451665  319582 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 00:44:27.463965  319582 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 00:44:27.551315  319582 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 00:44:27.647267  319582 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 00:44:27.660430  319582 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 00:44:27.676343  319582 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 00:44:27.676402  319582 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:44:27.687080  319582 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1217 00:44:27.687131  319582 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:44:27.695977  319582 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:44:27.705954  319582 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:44:27.715684  319582 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 00:44:27.723901  319582 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:44:27.733497  319582 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:44:27.748094  319582 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:44:27.756711  319582 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 00:44:27.763711  319582 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 00:44:27.770663  319582 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:44:27.860623  319582 ssh_runner.go:195] Run: sudo systemctl restart crio
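The sed edits above pin the pause image to registry.k8s.io/pause:3.10.1, switch CRI-O to the systemd cgroup manager, and add net.ipv4.ip_unprivileged_port_start=0 to default_sysctls before the daemon-reload and restart. A quick sanity check of the resulting drop-in, run inside the node (a sketch using the path from the log):
	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls' /etc/crio/crio.conf.d/02-crio.conf
	sudo systemctl is-active crio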
	I1217 00:44:28.015182  319582 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 00:44:28.015257  319582 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 00:44:28.019859  319582 start.go:564] Will wait 60s for crictl version
	I1217 00:44:28.019911  319582 ssh_runner.go:195] Run: which crictl
	I1217 00:44:28.024129  319582 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 00:44:28.051129  319582 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1217 00:44:28.051215  319582 ssh_runner.go:195] Run: crio --version
	I1217 00:44:28.081251  319582 ssh_runner.go:195] Run: crio --version
	I1217 00:44:28.109127  319582 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1217 00:44:28.110183  319582 cli_runner.go:164] Run: docker network inspect calico-802249 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 00:44:28.127820  319582 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1217 00:44:28.131677  319582 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
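The host entry is rewritten through a temp file and sudo cp rather than sed -i, most likely because /etc/hosts is bind-mounted into the kic container and rename-based in-place edits can fail there; the same pattern is reused further down for control-plane.minikube.internal. The result can be spot-checked with:
	minikube -p calico-802249 ssh -- grep minikube.internal /etc/hosts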
	I1217 00:44:28.142923  319582 kubeadm.go:884] updating cluster {Name:calico-802249 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:calico-802249 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 00:44:28.143075  319582 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1217 00:44:28.143137  319582 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 00:44:28.174524  319582 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 00:44:28.174547  319582 crio.go:433] Images already preloaded, skipping extraction
	I1217 00:44:28.174596  319582 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 00:44:28.200982  319582 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 00:44:28.201016  319582 cache_images.go:86] Images are preloaded, skipping loading
	I1217 00:44:28.201026  319582 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.2 crio true true} ...
	I1217 00:44:28.201135  319582 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=calico-802249 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:calico-802249 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico}
	I1217 00:44:28.201216  319582 ssh_runner.go:195] Run: crio config
	I1217 00:44:28.257059  319582 cni.go:84] Creating CNI manager for "calico"
	I1217 00:44:28.257098  319582 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 00:44:28.257119  319582 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-802249 NodeName:calico-802249 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 00:44:28.257236  319582 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "calico-802249"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
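The rendered kubeadm configuration above is copied to /var/tmp/minikube/kubeadm.yaml.new a few lines below and later promoted to kubeadm.yaml. When a run needs to be debugged by hand, the same file can drive kubeadm directly (a sketch, run inside the node):
	sudo kubeadm config images pull --config /var/tmp/minikube/kubeadm.yaml
	sudo kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml
Note that the real invocation near the end of this log passes a long --ignore-preflight-errors list to skip checks that are expected to fail under the docker driver.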
	I1217 00:44:28.257308  319582 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1217 00:44:28.265585  319582 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 00:44:28.265636  319582 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 00:44:28.273048  319582 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1217 00:44:28.285948  319582 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1217 00:44:28.301433  319582 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1217 00:44:28.314069  319582 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1217 00:44:28.317541  319582 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 00:44:28.327599  319582 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:44:28.417717  319582 ssh_runner.go:195] Run: sudo systemctl start kubelet
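With the 10-kubeadm.conf drop-in and kubelet.service copied over and systemd reloaded, the merged unit that systemd actually runs can be reviewed inside the node (a sketch):
	sudo systemctl cat kubelet
	sudo systemctl status kubelet --no-pager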
	I1217 00:44:28.438003  319582 certs.go:69] Setting up /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/calico-802249 for IP: 192.168.85.2
	I1217 00:44:28.438023  319582 certs.go:195] generating shared ca certs ...
	I1217 00:44:28.438046  319582 certs.go:227] acquiring lock for ca certs: {Name:mk3fafd0dd66863a6056cb02497503a5e6afecf4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:44:28.438206  319582 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22168-12816/.minikube/ca.key
	I1217 00:44:28.438262  319582 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22168-12816/.minikube/proxy-client-ca.key
	I1217 00:44:28.438275  319582 certs.go:257] generating profile certs ...
	I1217 00:44:28.438352  319582 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/calico-802249/client.key
	I1217 00:44:28.438374  319582 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/calico-802249/client.crt with IP's: []
	I1217 00:44:28.594061  319582 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/calico-802249/client.crt ...
	I1217 00:44:28.594090  319582 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/calico-802249/client.crt: {Name:mk0f33fd9eb0ec9a39ed2527660f28f5b980216e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:44:28.594268  319582 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/calico-802249/client.key ...
	I1217 00:44:28.594285  319582 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/calico-802249/client.key: {Name:mk2489c51abf85e78b6933083c4e8e1afb653b56 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:44:28.594399  319582 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/calico-802249/apiserver.key.3b2fb64b
	I1217 00:44:28.594419  319582 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/calico-802249/apiserver.crt.3b2fb64b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1217 00:44:28.665654  319582 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/calico-802249/apiserver.crt.3b2fb64b ...
	I1217 00:44:28.665677  319582 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/calico-802249/apiserver.crt.3b2fb64b: {Name:mk9f43cef323fb9e72972c64abf288ee1463490b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:44:28.665806  319582 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/calico-802249/apiserver.key.3b2fb64b ...
	I1217 00:44:28.665819  319582 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/calico-802249/apiserver.key.3b2fb64b: {Name:mk991dd896eccda5a6442518f7641b182c3da38f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:44:28.665900  319582 certs.go:382] copying /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/calico-802249/apiserver.crt.3b2fb64b -> /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/calico-802249/apiserver.crt
	I1217 00:44:28.665982  319582 certs.go:386] copying /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/calico-802249/apiserver.key.3b2fb64b -> /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/calico-802249/apiserver.key
	I1217 00:44:28.666072  319582 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/calico-802249/proxy-client.key
	I1217 00:44:28.666090  319582 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/calico-802249/proxy-client.crt with IP's: []
	I1217 00:44:28.772114  319582 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/calico-802249/proxy-client.crt ...
	I1217 00:44:28.772142  319582 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/calico-802249/proxy-client.crt: {Name:mk0ac25c98b5a6c77bc00c7d2d2f81d3b655c72c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:44:28.772329  319582 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/calico-802249/proxy-client.key ...
	I1217 00:44:28.772347  319582 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/calico-802249/proxy-client.key: {Name:mk3c305449f15905a8ec0a69180c43478a6fe6a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:44:28.772566  319582 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/16354.pem (1338 bytes)
	W1217 00:44:28.772609  319582 certs.go:480] ignoring /home/jenkins/minikube-integration/22168-12816/.minikube/certs/16354_empty.pem, impossibly tiny 0 bytes
	I1217 00:44:28.772618  319582 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca-key.pem (1679 bytes)
	I1217 00:44:28.772650  319582 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem (1078 bytes)
	I1217 00:44:28.772678  319582 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/cert.pem (1123 bytes)
	I1217 00:44:28.772705  319582 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/key.pem (1679 bytes)
	I1217 00:44:28.772754  319582 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/files/etc/ssl/certs/163542.pem (1708 bytes)
	I1217 00:44:28.773309  319582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 00:44:28.791255  319582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1217 00:44:28.811072  319582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 00:44:28.830941  319582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1671 bytes)
	I1217 00:44:28.849912  319582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/calico-802249/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1217 00:44:28.867507  319582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/calico-802249/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 00:44:28.887060  319582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/calico-802249/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 00:44:28.904705  319582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/calico-802249/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 00:44:28.921309  319582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 00:44:28.940803  319582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/certs/16354.pem --> /usr/share/ca-certificates/16354.pem (1338 bytes)
	I1217 00:44:28.957611  319582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/files/etc/ssl/certs/163542.pem --> /usr/share/ca-certificates/163542.pem (1708 bytes)
	I1217 00:44:28.974238  319582 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 00:44:28.985740  319582 ssh_runner.go:195] Run: openssl version
	I1217 00:44:28.991479  319582 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:44:28.998478  319582 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 00:44:29.005852  319582 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:44:29.009464  319582 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 00:05 /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:44:29.009524  319582 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:44:29.045443  319582 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 00:44:29.053385  319582 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1217 00:44:29.060469  319582 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/16354.pem
	I1217 00:44:29.067439  319582 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/16354.pem /etc/ssl/certs/16354.pem
	I1217 00:44:29.074260  319582 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16354.pem
	I1217 00:44:29.077668  319582 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 00:13 /usr/share/ca-certificates/16354.pem
	I1217 00:44:29.077714  319582 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16354.pem
	I1217 00:44:29.113345  319582 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 00:44:29.120368  319582 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/16354.pem /etc/ssl/certs/51391683.0
	I1217 00:44:29.127450  319582 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/163542.pem
	I1217 00:44:29.134913  319582 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/163542.pem /etc/ssl/certs/163542.pem
	I1217 00:44:29.142193  319582 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/163542.pem
	I1217 00:44:29.145713  319582 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 00:13 /usr/share/ca-certificates/163542.pem
	I1217 00:44:29.145761  319582 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/163542.pem
	I1217 00:44:29.180919  319582 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 00:44:29.188000  319582 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/163542.pem /etc/ssl/certs/3ec20f2e.0
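The .0 symlinks created above follow OpenSSL's subject-hash naming for CA lookup directories: each link name is the hash that openssl x509 -hash prints for the certificate it points at. A minimal check inside the node:
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # expected to print b5213941
	ls -l /etc/ssl/certs/b5213941.0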
	I1217 00:44:29.194920  319582 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 00:44:29.198258  319582 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1217 00:44:29.198310  319582 kubeadm.go:401] StartCluster: {Name:calico-802249 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:calico-802249 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:44:29.198375  319582 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 00:44:29.198406  319582 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 00:44:29.224212  319582 cri.go:89] found id: ""
	I1217 00:44:29.224270  319582 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 00:44:29.232058  319582 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 00:44:29.239714  319582 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 00:44:29.239760  319582 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 00:44:29.247085  319582 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 00:44:29.247104  319582 kubeadm.go:158] found existing configuration files:
	
	I1217 00:44:29.247154  319582 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1217 00:44:29.254280  319582 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 00:44:29.254326  319582 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 00:44:29.261387  319582 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1217 00:44:29.268820  319582 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 00:44:29.268858  319582 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 00:44:29.276159  319582 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1217 00:44:29.283902  319582 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 00:44:29.283958  319582 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 00:44:29.291708  319582 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1217 00:44:29.299893  319582 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 00:44:29.299945  319582 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 00:44:29.307435  319582 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 00:44:29.346623  319582 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1217 00:44:29.346701  319582 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 00:44:29.368094  319582 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 00:44:29.368197  319582 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1217 00:44:29.368252  319582 kubeadm.go:319] OS: Linux
	I1217 00:44:29.368322  319582 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 00:44:29.368388  319582 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 00:44:29.368477  319582 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 00:44:29.368547  319582 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 00:44:29.368627  319582 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 00:44:29.368786  319582 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 00:44:29.368873  319582 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 00:44:29.368944  319582 kubeadm.go:319] CGROUPS_IO: enabled
	I1217 00:44:29.438236  319582 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 00:44:29.438367  319582 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 00:44:29.438493  319582 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 00:44:29.446354  319582 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	
	
	==> CRI-O <==
	Dec 17 00:43:49 default-k8s-diff-port-414413 crio[566]: time="2025-12-17T00:43:49.390316678Z" level=info msg="Started container" PID=1743 containerID=f4fe7e2efb6c9874e3b2f9dacc235373b8e72b902ea1ddeb7fd1caa95111c574 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-pwxgr/dashboard-metrics-scraper id=b4da63f3-f49e-4edb-a531-a130b5cef0f8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3cd8ddbd09d24f2de4ff24d7acc1878a1f5db8baa6841072707f41b4ac6bf783
	Dec 17 00:43:50 default-k8s-diff-port-414413 crio[566]: time="2025-12-17T00:43:50.326671524Z" level=info msg="Removing container: 6a4901560df386b43625704e1859edc7fbe21d9c08ece38745b6655e35020604" id=f37be39e-1f1d-4ea2-8ac5-3334e639742f name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 17 00:43:50 default-k8s-diff-port-414413 crio[566]: time="2025-12-17T00:43:50.337163789Z" level=info msg="Removed container 6a4901560df386b43625704e1859edc7fbe21d9c08ece38745b6655e35020604: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-pwxgr/dashboard-metrics-scraper" id=f37be39e-1f1d-4ea2-8ac5-3334e639742f name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 17 00:44:08 default-k8s-diff-port-414413 crio[566]: time="2025-12-17T00:44:08.370738256Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=0b71ac28-61bd-44c6-9227-b9df7bf02c72 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:44:08 default-k8s-diff-port-414413 crio[566]: time="2025-12-17T00:44:08.371668142Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=7873d776-f549-4b0d-89e1-e021109af006 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:44:08 default-k8s-diff-port-414413 crio[566]: time="2025-12-17T00:44:08.372682328Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=0482844c-d87a-4538-ac64-881ff7d5860b name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 00:44:08 default-k8s-diff-port-414413 crio[566]: time="2025-12-17T00:44:08.372818746Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 00:44:08 default-k8s-diff-port-414413 crio[566]: time="2025-12-17T00:44:08.377561777Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 00:44:08 default-k8s-diff-port-414413 crio[566]: time="2025-12-17T00:44:08.377689947Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/7bf7c7b2e63d17d9205cb2138cda310c280d8e5ccbb472bf9c96acd6b3201272/merged/etc/passwd: no such file or directory"
	Dec 17 00:44:08 default-k8s-diff-port-414413 crio[566]: time="2025-12-17T00:44:08.377710891Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/7bf7c7b2e63d17d9205cb2138cda310c280d8e5ccbb472bf9c96acd6b3201272/merged/etc/group: no such file or directory"
	Dec 17 00:44:08 default-k8s-diff-port-414413 crio[566]: time="2025-12-17T00:44:08.377912122Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 00:44:08 default-k8s-diff-port-414413 crio[566]: time="2025-12-17T00:44:08.430244555Z" level=info msg="Created container 29dcc2e0fca01e5acc47fd9e7b42b73755a799de4a843cc7448f1cf3d24c1370: kube-system/storage-provisioner/storage-provisioner" id=0482844c-d87a-4538-ac64-881ff7d5860b name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 00:44:08 default-k8s-diff-port-414413 crio[566]: time="2025-12-17T00:44:08.430875327Z" level=info msg="Starting container: 29dcc2e0fca01e5acc47fd9e7b42b73755a799de4a843cc7448f1cf3d24c1370" id=0a755e85-d7b3-472c-9730-d2d4ed609112 name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 00:44:08 default-k8s-diff-port-414413 crio[566]: time="2025-12-17T00:44:08.432653105Z" level=info msg="Started container" PID=1761 containerID=29dcc2e0fca01e5acc47fd9e7b42b73755a799de4a843cc7448f1cf3d24c1370 description=kube-system/storage-provisioner/storage-provisioner id=0a755e85-d7b3-472c-9730-d2d4ed609112 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c9475c845c680cb0fbd371608c513dfc36cff9c0d8f39335db188ead17a1dd4a
	Dec 17 00:44:12 default-k8s-diff-port-414413 crio[566]: time="2025-12-17T00:44:12.262304069Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=09578481-1742-4aa5-baff-86a940a7efd9 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:44:12 default-k8s-diff-port-414413 crio[566]: time="2025-12-17T00:44:12.263484023Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=25e9fe98-ac0a-4e95-ba7e-a2083fa7eb89 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:44:12 default-k8s-diff-port-414413 crio[566]: time="2025-12-17T00:44:12.265095166Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-pwxgr/dashboard-metrics-scraper" id=902c95a2-14db-4fee-9cbd-93e054ab7b6d name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 00:44:12 default-k8s-diff-port-414413 crio[566]: time="2025-12-17T00:44:12.265307275Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 00:44:12 default-k8s-diff-port-414413 crio[566]: time="2025-12-17T00:44:12.271630407Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 00:44:12 default-k8s-diff-port-414413 crio[566]: time="2025-12-17T00:44:12.272361979Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 00:44:12 default-k8s-diff-port-414413 crio[566]: time="2025-12-17T00:44:12.298697486Z" level=info msg="Created container d2a2a6abdc96c42c27ab0c3e8b49c402a202b687de42012d4e22faf078a53746: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-pwxgr/dashboard-metrics-scraper" id=902c95a2-14db-4fee-9cbd-93e054ab7b6d name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 00:44:12 default-k8s-diff-port-414413 crio[566]: time="2025-12-17T00:44:12.299939127Z" level=info msg="Starting container: d2a2a6abdc96c42c27ab0c3e8b49c402a202b687de42012d4e22faf078a53746" id=45dd9baa-cbeb-413a-b471-cf9461351346 name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 00:44:12 default-k8s-diff-port-414413 crio[566]: time="2025-12-17T00:44:12.302297717Z" level=info msg="Started container" PID=1777 containerID=d2a2a6abdc96c42c27ab0c3e8b49c402a202b687de42012d4e22faf078a53746 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-pwxgr/dashboard-metrics-scraper id=45dd9baa-cbeb-413a-b471-cf9461351346 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3cd8ddbd09d24f2de4ff24d7acc1878a1f5db8baa6841072707f41b4ac6bf783
	Dec 17 00:44:12 default-k8s-diff-port-414413 crio[566]: time="2025-12-17T00:44:12.388535374Z" level=info msg="Removing container: f4fe7e2efb6c9874e3b2f9dacc235373b8e72b902ea1ddeb7fd1caa95111c574" id=454acaae-017c-47d3-a1e2-e6ad5365a98c name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 17 00:44:12 default-k8s-diff-port-414413 crio[566]: time="2025-12-17T00:44:12.402153655Z" level=info msg="Removed container f4fe7e2efb6c9874e3b2f9dacc235373b8e72b902ea1ddeb7fd1caa95111c574: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-pwxgr/dashboard-metrics-scraper" id=454acaae-017c-47d3-a1e2-e6ad5365a98c name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	d2a2a6abdc96c       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           18 seconds ago      Exited              dashboard-metrics-scraper   2                   3cd8ddbd09d24       dashboard-metrics-scraper-6ffb444bf9-pwxgr             kubernetes-dashboard
	29dcc2e0fca01       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           22 seconds ago      Running             storage-provisioner         1                   c9475c845c680       storage-provisioner                                    kube-system
	554f0df62e1c2       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   45 seconds ago      Running             kubernetes-dashboard        0                   fa4f371ed44d2       kubernetes-dashboard-855c9754f9-wnwc6                  kubernetes-dashboard
	b82b299f948d7       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           52 seconds ago      Running             coredns                     0                   4159bb6b9cc1a       coredns-66bc5c9577-v76f4                               kube-system
	1a5294d009027       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           52 seconds ago      Running             busybox                     1                   1793c737adaeb       busybox                                                default
	275d3d03f2346       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           52 seconds ago      Exited              storage-provisioner         0                   c9475c845c680       storage-provisioner                                    kube-system
	bbca296c30f3d       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                           52 seconds ago      Running             kube-proxy                  0                   373424454badc       kube-proxy-prlkw                                       kube-system
	1b6d441ac73c0       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           52 seconds ago      Running             kindnet-cni                 0                   4cb9b78188fe4       kindnet-hxhbf                                          kube-system
	2a7b291de067a       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                           55 seconds ago      Running             kube-apiserver              0                   4b52776844a99       kube-apiserver-default-k8s-diff-port-414413            kube-system
	4dcc77a289bba       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                           55 seconds ago      Running             etcd                        0                   b9e3daed73cb4       etcd-default-k8s-diff-port-414413                      kube-system
	ba3df04c6b3fe       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                           55 seconds ago      Running             kube-scheduler              0                   0e39de61468ce       kube-scheduler-default-k8s-diff-port-414413            kube-system
	eecadcae34c36       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                           55 seconds ago      Running             kube-controller-manager     0                   9e0781fc25868       kube-controller-manager-default-k8s-diff-port-414413   kube-system
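This listing reflects CRI-level container state on the node; an equivalent view can be pulled by hand with crictl (a sketch, using this profile's name):
	minikube -p default-k8s-diff-port-414413 ssh -- sudo crictl ps -a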
	
	
	==> coredns [b82b299f948d717658d6977755447250d679af51d1b6071b37f467e8810d95bf] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:34951 - 9759 "HINFO IN 2097766802274907779.5305395646306382892. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.492289075s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-414413
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-414413
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c7bb9b74fe8fa422b352c813eb039f077f405cb1
	                    minikube.k8s.io/name=default-k8s-diff-port-414413
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_17T00_42_36_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Dec 2025 00:42:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-414413
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Dec 2025 00:44:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Dec 2025 00:44:27 +0000   Wed, 17 Dec 2025 00:42:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Dec 2025 00:44:27 +0000   Wed, 17 Dec 2025 00:42:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Dec 2025 00:44:27 +0000   Wed, 17 Dec 2025 00:42:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Dec 2025 00:44:27 +0000   Wed, 17 Dec 2025 00:42:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-414413
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 f435001bb2f82675fe905cfc693dda54
	  System UUID:                30e488d3-49b2-4dae-91a3-bdf1e8cb0774
	  Boot ID:                    0e9cedc6-c46e-4354-b3d2-9272a8b33ae5
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 coredns-66bc5c9577-v76f4                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     109s
	  kube-system                 etcd-default-k8s-diff-port-414413                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         115s
	  kube-system                 kindnet-hxhbf                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      110s
	  kube-system                 kube-apiserver-default-k8s-diff-port-414413             250m (3%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-414413    200m (2%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-proxy-prlkw                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-scheduler-default-k8s-diff-port-414413             100m (1%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-pwxgr              0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-wnwc6                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 108s               kube-proxy       
	  Normal  Starting                 52s                kube-proxy       
	  Normal  NodeHasSufficientMemory  115s               kubelet          Node default-k8s-diff-port-414413 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    115s               kubelet          Node default-k8s-diff-port-414413 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     115s               kubelet          Node default-k8s-diff-port-414413 status is now: NodeHasSufficientPID
	  Normal  Starting                 115s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           110s               node-controller  Node default-k8s-diff-port-414413 event: Registered Node default-k8s-diff-port-414413 in Controller
	  Normal  NodeReady                98s                kubelet          Node default-k8s-diff-port-414413 status is now: NodeReady
	  Normal  Starting                 56s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  56s (x8 over 56s)  kubelet          Node default-k8s-diff-port-414413 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    56s (x8 over 56s)  kubelet          Node default-k8s-diff-port-414413 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     56s (x8 over 56s)  kubelet          Node default-k8s-diff-port-414413 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           50s                node-controller  Node default-k8s-diff-port-414413 event: Registered Node default-k8s-diff-port-414413 in Controller
	
	
	==> dmesg <==
	[  +0.089382] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024236] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.864694] kauditd_printk_skb: 47 callbacks suppressed
	[Dec17 00:07] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +1.006904] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +1.023891] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +1.023891] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +1.023853] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +1.023879] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +2.048755] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +4.030595] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +8.447143] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[ +16.382404] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000015] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[Dec17 00:08] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	
	
	==> etcd [4dcc77a289bba808ececc2d4f0efa70e966e843b2057d6de5ad0054d0be435c8] <==
	{"level":"warn","ts":"2025-12-17T00:43:35.945210Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55724","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:35.954650Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55732","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:35.969323Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55746","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:35.975906Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55760","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:35.983544Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55776","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:35.993400Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55806","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:36.010807Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55840","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:36.012840Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55824","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:36.017117Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55852","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:36.024680Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55864","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:36.031644Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55880","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:36.038156Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55904","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:36.044403Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55930","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:36.053941Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55944","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:36.061944Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55956","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:36.069324Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55978","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:36.076667Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55990","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:36.083136Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:36.090217Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56022","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:36.096816Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56042","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:36.116463Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56062","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:36.124649Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56082","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:36.132667Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56114","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:36.185443Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56150","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-17T00:43:47.269702Z","caller":"traceutil/trace.go:172","msg":"trace[76811734] transaction","detail":"{read_only:false; response_revision:580; number_of_response:1; }","duration":"124.176168ms","start":"2025-12-17T00:43:47.145505Z","end":"2025-12-17T00:43:47.269682Z","steps":["trace[76811734] 'process raft request'  (duration: 124.057881ms)"],"step_count":1}
	
	
	==> kernel <==
	 00:44:30 up  1:27,  0 user,  load average: 2.73, 2.83, 2.01
	Linux default-k8s-diff-port-414413 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [1b6d441ac73c0906999e2b074e7fb8e741006a82fa543a72336ae290aef62cf4] <==
	I1217 00:43:37.862276       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1217 00:43:37.862602       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1217 00:43:37.862767       1 main.go:148] setting mtu 1500 for CNI 
	I1217 00:43:37.862787       1 main.go:178] kindnetd IP family: "ipv4"
	I1217 00:43:37.862815       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-17T00:43:38Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1217 00:43:38.062320       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1217 00:43:38.062347       1 controller.go:381] "Waiting for informer caches to sync"
	I1217 00:43:38.062361       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1217 00:43:38.129736       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1217 00:43:38.530040       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1217 00:43:38.530067       1 metrics.go:72] Registering metrics
	I1217 00:43:38.530136       1 controller.go:711] "Syncing nftables rules"
	I1217 00:43:48.062155       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1217 00:43:48.062197       1 main.go:301] handling current node
	I1217 00:43:58.064458       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1217 00:43:58.064571       1 main.go:301] handling current node
	I1217 00:44:08.063254       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1217 00:44:08.063293       1 main.go:301] handling current node
	I1217 00:44:18.062743       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1217 00:44:18.062799       1 main.go:301] handling current node
	I1217 00:44:28.062857       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1217 00:44:28.062906       1 main.go:301] handling current node
	
	
	==> kube-apiserver [2a7b291de067a5044f406eaa0104c52261424e3730e6c2e4d38864b41943eddd] <==
	I1217 00:43:36.689090       1 autoregister_controller.go:144] Starting autoregister controller
	I1217 00:43:36.689097       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1217 00:43:36.689103       1 cache.go:39] Caches are synced for autoregister controller
	I1217 00:43:36.689135       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1217 00:43:36.689170       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1217 00:43:36.689200       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1217 00:43:36.689208       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1217 00:43:36.689228       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1217 00:43:36.689783       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1217 00:43:36.700181       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1217 00:43:36.705127       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1217 00:43:36.715910       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1217 00:43:36.716377       1 policy_source.go:240] refreshing policies
	I1217 00:43:36.727222       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1217 00:43:36.994093       1 controller.go:667] quota admission added evaluator for: namespaces
	I1217 00:43:37.023797       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1217 00:43:37.043847       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1217 00:43:37.050695       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1217 00:43:37.058095       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1217 00:43:37.086463       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.98.127.68"}
	I1217 00:43:37.096116       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.101.63.172"}
	I1217 00:43:37.594952       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1217 00:43:40.414329       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1217 00:43:40.463285       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1217 00:43:40.564772       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [eecadcae34c3698337c66c6d6dbab2066993e3216b64d194344407552bc449b5] <==
	I1217 00:43:40.010114       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1217 00:43:40.010160       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1217 00:43:40.010205       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1217 00:43:40.010220       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1217 00:43:40.010230       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1217 00:43:40.010562       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1217 00:43:40.011793       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1217 00:43:40.013767       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1217 00:43:40.013782       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1217 00:43:40.013878       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1217 00:43:40.013937       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1217 00:43:40.013976       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-414413"
	I1217 00:43:40.014045       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1217 00:43:40.015237       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1217 00:43:40.016425       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1217 00:43:40.016508       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1217 00:43:40.016549       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1217 00:43:40.016555       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1217 00:43:40.016561       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1217 00:43:40.017802       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1217 00:43:40.020319       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1217 00:43:40.020423       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1217 00:43:40.021637       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1217 00:43:40.023825       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1217 00:43:40.031216       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [bbca296c30f3d3f0cca453021716cd6a26728333310fb6dfdeb35c44a6832375] <==
	I1217 00:43:37.652279       1 server_linux.go:53] "Using iptables proxy"
	I1217 00:43:37.717326       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1217 00:43:37.817895       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1217 00:43:37.817936       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1217 00:43:37.818057       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1217 00:43:37.839203       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1217 00:43:37.839278       1 server_linux.go:132] "Using iptables Proxier"
	I1217 00:43:37.845427       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1217 00:43:37.845977       1 server.go:527] "Version info" version="v1.34.2"
	I1217 00:43:37.846049       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 00:43:37.847667       1 config.go:200] "Starting service config controller"
	I1217 00:43:37.847704       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1217 00:43:37.847712       1 config.go:403] "Starting serviceCIDR config controller"
	I1217 00:43:37.847724       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1217 00:43:37.847748       1 config.go:106] "Starting endpoint slice config controller"
	I1217 00:43:37.847766       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1217 00:43:37.847787       1 config.go:309] "Starting node config controller"
	I1217 00:43:37.847797       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1217 00:43:37.847804       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1217 00:43:37.948506       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1217 00:43:37.948538       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1217 00:43:37.948546       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [ba3df04c6b3feaf2f234a1a9b098c1269d844cdbaf6531304d6ddd40b10820d5] <==
	I1217 00:43:35.116203       1 serving.go:386] Generated self-signed cert in-memory
	W1217 00:43:36.609337       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1217 00:43:36.609400       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1217 00:43:36.609413       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1217 00:43:36.609422       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1217 00:43:36.668390       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1217 00:43:36.669160       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 00:43:36.676800       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1217 00:43:36.677107       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1217 00:43:36.677131       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1217 00:43:36.677153       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1217 00:43:36.777514       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 17 00:43:40 default-k8s-diff-port-414413 kubelet[725]: I1217 00:43:40.742380     725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9qln\" (UniqueName: \"kubernetes.io/projected/b3449637-778d-417a-b505-434f3216b394-kube-api-access-v9qln\") pod \"dashboard-metrics-scraper-6ffb444bf9-pwxgr\" (UID: \"b3449637-778d-417a-b505-434f3216b394\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-pwxgr"
	Dec 17 00:43:40 default-k8s-diff-port-414413 kubelet[725]: I1217 00:43:40.742456     725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/bd229193-f29a-44ac-a723-c842b5034e75-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-wnwc6\" (UID: \"bd229193-f29a-44ac-a723-c842b5034e75\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-wnwc6"
	Dec 17 00:43:40 default-k8s-diff-port-414413 kubelet[725]: I1217 00:43:40.742486     725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5f9r\" (UniqueName: \"kubernetes.io/projected/bd229193-f29a-44ac-a723-c842b5034e75-kube-api-access-b5f9r\") pod \"kubernetes-dashboard-855c9754f9-wnwc6\" (UID: \"bd229193-f29a-44ac-a723-c842b5034e75\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-wnwc6"
	Dec 17 00:43:40 default-k8s-diff-port-414413 kubelet[725]: I1217 00:43:40.742507     725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/b3449637-778d-417a-b505-434f3216b394-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-pwxgr\" (UID: \"b3449637-778d-417a-b505-434f3216b394\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-pwxgr"
	Dec 17 00:43:44 default-k8s-diff-port-414413 kubelet[725]: I1217 00:43:44.395694     725 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Dec 17 00:43:46 default-k8s-diff-port-414413 kubelet[725]: I1217 00:43:46.733370     725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-wnwc6" podStartSLOduration=2.255547888 podStartE2EDuration="6.733347641s" podCreationTimestamp="2025-12-17 00:43:40 +0000 UTC" firstStartedPulling="2025-12-17 00:43:40.957576642 +0000 UTC m=+6.846042370" lastFinishedPulling="2025-12-17 00:43:45.435376393 +0000 UTC m=+11.323842123" observedRunningTime="2025-12-17 00:43:46.331555585 +0000 UTC m=+12.220021330" watchObservedRunningTime="2025-12-17 00:43:46.733347641 +0000 UTC m=+12.621813375"
	Dec 17 00:43:49 default-k8s-diff-port-414413 kubelet[725]: I1217 00:43:49.321274     725 scope.go:117] "RemoveContainer" containerID="6a4901560df386b43625704e1859edc7fbe21d9c08ece38745b6655e35020604"
	Dec 17 00:43:50 default-k8s-diff-port-414413 kubelet[725]: I1217 00:43:50.325310     725 scope.go:117] "RemoveContainer" containerID="6a4901560df386b43625704e1859edc7fbe21d9c08ece38745b6655e35020604"
	Dec 17 00:43:50 default-k8s-diff-port-414413 kubelet[725]: I1217 00:43:50.325483     725 scope.go:117] "RemoveContainer" containerID="f4fe7e2efb6c9874e3b2f9dacc235373b8e72b902ea1ddeb7fd1caa95111c574"
	Dec 17 00:43:50 default-k8s-diff-port-414413 kubelet[725]: E1217 00:43:50.325696     725 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-pwxgr_kubernetes-dashboard(b3449637-778d-417a-b505-434f3216b394)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-pwxgr" podUID="b3449637-778d-417a-b505-434f3216b394"
	Dec 17 00:43:51 default-k8s-diff-port-414413 kubelet[725]: I1217 00:43:51.329697     725 scope.go:117] "RemoveContainer" containerID="f4fe7e2efb6c9874e3b2f9dacc235373b8e72b902ea1ddeb7fd1caa95111c574"
	Dec 17 00:43:51 default-k8s-diff-port-414413 kubelet[725]: E1217 00:43:51.329886     725 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-pwxgr_kubernetes-dashboard(b3449637-778d-417a-b505-434f3216b394)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-pwxgr" podUID="b3449637-778d-417a-b505-434f3216b394"
	Dec 17 00:43:57 default-k8s-diff-port-414413 kubelet[725]: I1217 00:43:57.746137     725 scope.go:117] "RemoveContainer" containerID="f4fe7e2efb6c9874e3b2f9dacc235373b8e72b902ea1ddeb7fd1caa95111c574"
	Dec 17 00:43:57 default-k8s-diff-port-414413 kubelet[725]: E1217 00:43:57.746375     725 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-pwxgr_kubernetes-dashboard(b3449637-778d-417a-b505-434f3216b394)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-pwxgr" podUID="b3449637-778d-417a-b505-434f3216b394"
	Dec 17 00:44:08 default-k8s-diff-port-414413 kubelet[725]: I1217 00:44:08.370314     725 scope.go:117] "RemoveContainer" containerID="275d3d03f2346fc781571f2f61dc5d70168875e4ee6e2e5783f3893a19e24e67"
	Dec 17 00:44:12 default-k8s-diff-port-414413 kubelet[725]: I1217 00:44:12.260012     725 scope.go:117] "RemoveContainer" containerID="f4fe7e2efb6c9874e3b2f9dacc235373b8e72b902ea1ddeb7fd1caa95111c574"
	Dec 17 00:44:12 default-k8s-diff-port-414413 kubelet[725]: I1217 00:44:12.385958     725 scope.go:117] "RemoveContainer" containerID="f4fe7e2efb6c9874e3b2f9dacc235373b8e72b902ea1ddeb7fd1caa95111c574"
	Dec 17 00:44:12 default-k8s-diff-port-414413 kubelet[725]: I1217 00:44:12.386484     725 scope.go:117] "RemoveContainer" containerID="d2a2a6abdc96c42c27ab0c3e8b49c402a202b687de42012d4e22faf078a53746"
	Dec 17 00:44:12 default-k8s-diff-port-414413 kubelet[725]: E1217 00:44:12.386690     725 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-pwxgr_kubernetes-dashboard(b3449637-778d-417a-b505-434f3216b394)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-pwxgr" podUID="b3449637-778d-417a-b505-434f3216b394"
	Dec 17 00:44:17 default-k8s-diff-port-414413 kubelet[725]: I1217 00:44:17.746350     725 scope.go:117] "RemoveContainer" containerID="d2a2a6abdc96c42c27ab0c3e8b49c402a202b687de42012d4e22faf078a53746"
	Dec 17 00:44:17 default-k8s-diff-port-414413 kubelet[725]: E1217 00:44:17.746540     725 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-pwxgr_kubernetes-dashboard(b3449637-778d-417a-b505-434f3216b394)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-pwxgr" podUID="b3449637-778d-417a-b505-434f3216b394"
	Dec 17 00:44:28 default-k8s-diff-port-414413 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 17 00:44:28 default-k8s-diff-port-414413 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 17 00:44:28 default-k8s-diff-port-414413 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:44:28 default-k8s-diff-port-414413 systemd[1]: kubelet.service: Consumed 1.733s CPU time.
	
	
	==> kubernetes-dashboard [554f0df62e1c2a39c6dcfbc1c0ee65889b3ab428dc9ed21a3ca89b258910f564] <==
	2025/12/17 00:43:45 Starting overwatch
	2025/12/17 00:43:45 Using namespace: kubernetes-dashboard
	2025/12/17 00:43:45 Using in-cluster config to connect to apiserver
	2025/12/17 00:43:45 Using secret token for csrf signing
	2025/12/17 00:43:45 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/17 00:43:45 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/17 00:43:45 Successful initial request to the apiserver, version: v1.34.2
	2025/12/17 00:43:45 Generating JWE encryption key
	2025/12/17 00:43:45 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/17 00:43:45 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/17 00:43:45 Initializing JWE encryption key from synchronized object
	2025/12/17 00:43:45 Creating in-cluster Sidecar client
	2025/12/17 00:43:45 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/17 00:43:45 Serving insecurely on HTTP port: 9090
	2025/12/17 00:44:15 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [275d3d03f2346fc781571f2f61dc5d70168875e4ee6e2e5783f3893a19e24e67] <==
	I1217 00:43:37.619185       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1217 00:44:07.621558       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [29dcc2e0fca01e5acc47fd9e7b42b73755a799de4a843cc7448f1cf3d24c1370] <==
	I1217 00:44:08.444564       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1217 00:44:08.451081       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1217 00:44:08.451125       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1217 00:44:08.453171       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:44:11.908671       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:44:16.168708       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:44:19.767762       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:44:22.822195       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:44:25.844326       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:44:25.848700       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1217 00:44:25.848883       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1217 00:44:25.849051       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9cb29e54-a67b-4f6f-a2d9-d357efab670a", APIVersion:"v1", ResourceVersion:"630", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-414413_7f1e76f4-69de-441e-a1bb-f1aed0cd50bd became leader
	I1217 00:44:25.849104       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-414413_7f1e76f4-69de-441e-a1bb-f1aed0cd50bd!
	W1217 00:44:25.851016       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:44:25.855207       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1217 00:44:25.949358       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-414413_7f1e76f4-69de-441e-a1bb-f1aed0cd50bd!
	W1217 00:44:27.858898       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:44:27.863123       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:44:29.866914       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:44:29.871332       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-414413 -n default-k8s-diff-port-414413
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-414413 -n default-k8s-diff-port-414413: exit status 2 (318.452775ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context default-k8s-diff-port-414413 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect default-k8s-diff-port-414413
helpers_test.go:244: (dbg) docker inspect default-k8s-diff-port-414413:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "32e520445c9ef469b69a7cfa94fa07b2c047bc072eab1f9bd789716ea62b2b17",
	        "Created": "2025-12-17T00:42:18.411894947Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 306548,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T00:43:24.886655377Z",
	            "FinishedAt": "2025-12-17T00:43:23.963069349Z"
	        },
	        "Image": "sha256:2e44aac5cae5bb6b68b129ed5c85e80a5c1aac07706537d46ba12326f0e5c3cf",
	        "ResolvConfPath": "/var/lib/docker/containers/32e520445c9ef469b69a7cfa94fa07b2c047bc072eab1f9bd789716ea62b2b17/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/32e520445c9ef469b69a7cfa94fa07b2c047bc072eab1f9bd789716ea62b2b17/hostname",
	        "HostsPath": "/var/lib/docker/containers/32e520445c9ef469b69a7cfa94fa07b2c047bc072eab1f9bd789716ea62b2b17/hosts",
	        "LogPath": "/var/lib/docker/containers/32e520445c9ef469b69a7cfa94fa07b2c047bc072eab1f9bd789716ea62b2b17/32e520445c9ef469b69a7cfa94fa07b2c047bc072eab1f9bd789716ea62b2b17-json.log",
	        "Name": "/default-k8s-diff-port-414413",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-414413:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-414413",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "32e520445c9ef469b69a7cfa94fa07b2c047bc072eab1f9bd789716ea62b2b17",
	                "LowerDir": "/var/lib/docker/overlay2/f63ae4354f75340680ea6735a9f2526da1a4c2e021a8a8e10a3b649ecbc014e0-init/diff:/var/lib/docker/overlay2/594b812fd6d8db89dab322ea9e00d43dd555e9709fb5e6953e3873cce717392c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f63ae4354f75340680ea6735a9f2526da1a4c2e021a8a8e10a3b649ecbc014e0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f63ae4354f75340680ea6735a9f2526da1a4c2e021a8a8e10a3b649ecbc014e0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f63ae4354f75340680ea6735a9f2526da1a4c2e021a8a8e10a3b649ecbc014e0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-414413",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-414413/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-414413",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-414413",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-414413",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "09c00b839672fde06dfd4934d198cef1659e7eb358cb0a9f8913ae9ff66d80c2",
	            "SandboxKey": "/var/run/docker/netns/09c00b839672",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33103"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33104"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33107"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33105"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33106"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-414413": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a57026acfc125e7890d2c5444987c6f9f2a024f5d99a4bf5d6821c92ba08cc07",
	                    "EndpointID": "044fbcb88ff5df9641dce86e342b82fd42526e568ac0ce22405f7b85d5d3ba97",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "32:be:ad:82:22:b4",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-414413",
	                        "32e520445c9e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-414413 -n default-k8s-diff-port-414413
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-414413 -n default-k8s-diff-port-414413: exit status 2 (325.012208ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-414413 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-414413 logs -n 25: (1.103819015s)
helpers_test.go:261: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p newest-cni-653717 --alsologtostderr -v=3                                                                                                                                                                                                          │ newest-cni-653717            │ jenkins │ v1.37.0 │ 17 Dec 25 00:43 UTC │ 17 Dec 25 00:43 UTC │
	│ stop    │ -p default-k8s-diff-port-414413 --alsologtostderr -v=3                                                                                                                                                                                               │ default-k8s-diff-port-414413 │ jenkins │ v1.37.0 │ 17 Dec 25 00:43 UTC │ 17 Dec 25 00:43 UTC │
	│ addons  │ enable dashboard -p newest-cni-653717 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ newest-cni-653717            │ jenkins │ v1.37.0 │ 17 Dec 25 00:43 UTC │ 17 Dec 25 00:43 UTC │
	│ start   │ -p newest-cni-653717 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-653717            │ jenkins │ v1.37.0 │ 17 Dec 25 00:43 UTC │ 17 Dec 25 00:43 UTC │
	│ addons  │ enable dashboard -p embed-certs-153232 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ embed-certs-153232           │ jenkins │ v1.37.0 │ 17 Dec 25 00:43 UTC │ 17 Dec 25 00:43 UTC │
	│ start   │ -p embed-certs-153232 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-153232           │ jenkins │ v1.37.0 │ 17 Dec 25 00:43 UTC │ 17 Dec 25 00:44 UTC │
	│ image   │ newest-cni-653717 image list --format=json                                                                                                                                                                                                           │ newest-cni-653717            │ jenkins │ v1.37.0 │ 17 Dec 25 00:43 UTC │ 17 Dec 25 00:43 UTC │
	│ pause   │ -p newest-cni-653717 --alsologtostderr -v=1                                                                                                                                                                                                          │ newest-cni-653717            │ jenkins │ v1.37.0 │ 17 Dec 25 00:43 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-414413 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                              │ default-k8s-diff-port-414413 │ jenkins │ v1.37.0 │ 17 Dec 25 00:43 UTC │ 17 Dec 25 00:43 UTC │
	│ start   │ -p default-k8s-diff-port-414413 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-414413 │ jenkins │ v1.37.0 │ 17 Dec 25 00:43 UTC │ 17 Dec 25 00:44 UTC │
	│ delete  │ -p newest-cni-653717                                                                                                                                                                                                                                 │ newest-cni-653717            │ jenkins │ v1.37.0 │ 17 Dec 25 00:43 UTC │ 17 Dec 25 00:43 UTC │
	│ delete  │ -p newest-cni-653717                                                                                                                                                                                                                                 │ newest-cni-653717            │ jenkins │ v1.37.0 │ 17 Dec 25 00:43 UTC │ 17 Dec 25 00:43 UTC │
	│ start   │ -p auto-802249 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                              │ auto-802249                  │ jenkins │ v1.37.0 │ 17 Dec 25 00:43 UTC │                     │
	│ image   │ no-preload-864613 image list --format=json                                                                                                                                                                                                           │ no-preload-864613            │ jenkins │ v1.37.0 │ 17 Dec 25 00:43 UTC │ 17 Dec 25 00:43 UTC │
	│ pause   │ -p no-preload-864613 --alsologtostderr -v=1                                                                                                                                                                                                          │ no-preload-864613            │ jenkins │ v1.37.0 │ 17 Dec 25 00:43 UTC │                     │
	│ delete  │ -p no-preload-864613                                                                                                                                                                                                                                 │ no-preload-864613            │ jenkins │ v1.37.0 │ 17 Dec 25 00:43 UTC │ 17 Dec 25 00:43 UTC │
	│ delete  │ -p no-preload-864613                                                                                                                                                                                                                                 │ no-preload-864613            │ jenkins │ v1.37.0 │ 17 Dec 25 00:43 UTC │ 17 Dec 25 00:43 UTC │
	│ start   │ -p kindnet-802249 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio                                                                                                             │ kindnet-802249               │ jenkins │ v1.37.0 │ 17 Dec 25 00:43 UTC │ 17 Dec 25 00:44 UTC │
	│ image   │ embed-certs-153232 image list --format=json                                                                                                                                                                                                          │ embed-certs-153232           │ jenkins │ v1.37.0 │ 17 Dec 25 00:44 UTC │ 17 Dec 25 00:44 UTC │
	│ pause   │ -p embed-certs-153232 --alsologtostderr -v=1                                                                                                                                                                                                         │ embed-certs-153232           │ jenkins │ v1.37.0 │ 17 Dec 25 00:44 UTC │                     │
	│ delete  │ -p embed-certs-153232                                                                                                                                                                                                                                │ embed-certs-153232           │ jenkins │ v1.37.0 │ 17 Dec 25 00:44 UTC │ 17 Dec 25 00:44 UTC │
	│ delete  │ -p embed-certs-153232                                                                                                                                                                                                                                │ embed-certs-153232           │ jenkins │ v1.37.0 │ 17 Dec 25 00:44 UTC │ 17 Dec 25 00:44 UTC │
	│ start   │ -p calico-802249 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio                                                                                                               │ calico-802249                │ jenkins │ v1.37.0 │ 17 Dec 25 00:44 UTC │                     │
	│ image   │ default-k8s-diff-port-414413 image list --format=json                                                                                                                                                                                                │ default-k8s-diff-port-414413 │ jenkins │ v1.37.0 │ 17 Dec 25 00:44 UTC │ 17 Dec 25 00:44 UTC │
	│ pause   │ -p default-k8s-diff-port-414413 --alsologtostderr -v=1                                                                                                                                                                                               │ default-k8s-diff-port-414413 │ jenkins │ v1.37.0 │ 17 Dec 25 00:44 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴───────
──────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 00:44:20
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 00:44:20.394476  319582 out.go:360] Setting OutFile to fd 1 ...
	I1217 00:44:20.394713  319582 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:44:20.394721  319582 out.go:374] Setting ErrFile to fd 2...
	I1217 00:44:20.394726  319582 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:44:20.394900  319582 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-12816/.minikube/bin
	I1217 00:44:20.395351  319582 out.go:368] Setting JSON to false
	I1217 00:44:20.396505  319582 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":5210,"bootTime":1765927050,"procs":321,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 00:44:20.396553  319582 start.go:143] virtualization: kvm guest
	I1217 00:44:20.398507  319582 out.go:179] * [calico-802249] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 00:44:20.399696  319582 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 00:44:20.399711  319582 notify.go:221] Checking for updates...
	I1217 00:44:20.402588  319582 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 00:44:20.404189  319582 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22168-12816/kubeconfig
	I1217 00:44:20.405312  319582 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-12816/.minikube
	I1217 00:44:20.406298  319582 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 00:44:20.407348  319582 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 00:44:20.408721  319582 config.go:182] Loaded profile config "auto-802249": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:44:20.408841  319582 config.go:182] Loaded profile config "default-k8s-diff-port-414413": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:44:20.408941  319582 config.go:182] Loaded profile config "kindnet-802249": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:44:20.409097  319582 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 00:44:20.432664  319582 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1217 00:44:20.432818  319582 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:44:20.488291  319582 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:74 SystemTime:2025-12-17 00:44:20.478115459 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 00:44:20.488404  319582 docker.go:319] overlay module found
	I1217 00:44:20.489986  319582 out.go:179] * Using the docker driver based on user configuration
	I1217 00:44:20.491187  319582 start.go:309] selected driver: docker
	I1217 00:44:20.491205  319582 start.go:927] validating driver "docker" against <nil>
	I1217 00:44:20.491219  319582 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 00:44:20.492102  319582 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:44:20.549295  319582 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:74 SystemTime:2025-12-17 00:44:20.539290454 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 00:44:20.549435  319582 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1217 00:44:20.549704  319582 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 00:44:20.551178  319582 out.go:179] * Using Docker driver with root privileges
	I1217 00:44:20.552397  319582 cni.go:84] Creating CNI manager for "calico"
	I1217 00:44:20.552420  319582 start_flags.go:336] Found "Calico" CNI - setting NetworkPlugin=cni
	I1217 00:44:20.552502  319582 start.go:353] cluster config:
	{Name:calico-802249 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:calico-802249 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:44:20.553784  319582 out.go:179] * Starting "calico-802249" primary control-plane node in "calico-802249" cluster
	I1217 00:44:20.554852  319582 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 00:44:20.556015  319582 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 00:44:20.557065  319582 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1217 00:44:20.557097  319582 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22168-12816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1217 00:44:20.557108  319582 cache.go:65] Caching tarball of preloaded images
	I1217 00:44:20.557164  319582 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 00:44:20.557207  319582 preload.go:238] Found /home/jenkins/minikube-integration/22168-12816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1217 00:44:20.557222  319582 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1217 00:44:20.557323  319582 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/calico-802249/config.json ...
	I1217 00:44:20.557355  319582 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/calico-802249/config.json: {Name:mk6d81e1e5e976b995c9f4a77bc824df3c821922 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:44:20.577810  319582 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 00:44:20.577826  319582 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 00:44:20.577843  319582 cache.go:243] Successfully downloaded all kic artifacts
	I1217 00:44:20.577880  319582 start.go:360] acquireMachinesLock for calico-802249: {Name:mk66f9af13fbe38f7686efc64dcebf8f2643e35c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 00:44:20.577967  319582 start.go:364] duration metric: took 73.471µs to acquireMachinesLock for "calico-802249"
	I1217 00:44:20.578007  319582 start.go:93] Provisioning new machine with config: &{Name:calico-802249 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:calico-802249 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 00:44:20.578070  319582 start.go:125] createHost starting for "" (driver="docker")
	W1217 00:44:19.532393  307526 node_ready.go:57] node "auto-802249" has "Ready":"False" status (will retry)
	W1217 00:44:21.533119  307526 node_ready.go:57] node "auto-802249" has "Ready":"False" status (will retry)
	W1217 00:44:19.010486  313838 node_ready.go:57] node "kindnet-802249" has "Ready":"False" status (will retry)
	W1217 00:44:21.510710  313838 node_ready.go:57] node "kindnet-802249" has "Ready":"False" status (will retry)
	I1217 00:44:23.010272  313838 node_ready.go:49] node "kindnet-802249" is "Ready"
	I1217 00:44:23.010301  313838 node_ready.go:38] duration metric: took 10.503419159s for node "kindnet-802249" to be "Ready" ...
	I1217 00:44:23.010316  313838 api_server.go:52] waiting for apiserver process to appear ...
	I1217 00:44:23.010366  313838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:44:23.023570  313838 api_server.go:72] duration metric: took 10.850225274s to wait for apiserver process to appear ...
	I1217 00:44:23.023598  313838 api_server.go:88] waiting for apiserver healthz status ...
	I1217 00:44:23.023617  313838 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1217 00:44:23.029047  313838 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1217 00:44:23.030072  313838 api_server.go:141] control plane version: v1.34.2
	I1217 00:44:23.030101  313838 api_server.go:131] duration metric: took 6.495577ms to wait for apiserver health ...
	I1217 00:44:23.030111  313838 system_pods.go:43] waiting for kube-system pods to appear ...
	I1217 00:44:23.035470  313838 system_pods.go:59] 8 kube-system pods found
	I1217 00:44:23.035510  313838 system_pods.go:61] "coredns-66bc5c9577-7p575" [06ee85a9-892e-40f5-adf9-42882599366f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 00:44:23.035524  313838 system_pods.go:61] "etcd-kindnet-802249" [a9cfcb70-d987-439c-8579-c1111de52883] Running
	I1217 00:44:23.035539  313838 system_pods.go:61] "kindnet-c6fx4" [fc92c1bc-53fb-49d3-92c2-0c698d1961fb] Running
	I1217 00:44:23.035549  313838 system_pods.go:61] "kube-apiserver-kindnet-802249" [7714e874-38d7-44b5-91f6-28244fbd6e7b] Running
	I1217 00:44:23.035558  313838 system_pods.go:61] "kube-controller-manager-kindnet-802249" [8367fc83-e848-414e-868e-26e124b5399c] Running
	I1217 00:44:23.035569  313838 system_pods.go:61] "kube-proxy-zgfw2" [f3bb7dc0-0c90-4a78-a297-06910340ef6d] Running
	I1217 00:44:23.035579  313838 system_pods.go:61] "kube-scheduler-kindnet-802249" [e873311a-3af1-4901-9080-7e37437a542a] Running
	I1217 00:44:23.035592  313838 system_pods.go:61] "storage-provisioner" [8075127e-89ff-4d27-b16d-5d615bb18953] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 00:44:23.035604  313838 system_pods.go:74] duration metric: took 5.486672ms to wait for pod list to return data ...
	I1217 00:44:23.035629  313838 default_sa.go:34] waiting for default service account to be created ...
	I1217 00:44:23.038123  313838 default_sa.go:45] found service account: "default"
	I1217 00:44:23.038148  313838 default_sa.go:55] duration metric: took 2.505084ms for default service account to be created ...
	I1217 00:44:23.038157  313838 system_pods.go:116] waiting for k8s-apps to be running ...
	I1217 00:44:23.040441  313838 system_pods.go:86] 8 kube-system pods found
	I1217 00:44:23.040474  313838 system_pods.go:89] "coredns-66bc5c9577-7p575" [06ee85a9-892e-40f5-adf9-42882599366f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 00:44:23.040481  313838 system_pods.go:89] "etcd-kindnet-802249" [a9cfcb70-d987-439c-8579-c1111de52883] Running
	I1217 00:44:23.040488  313838 system_pods.go:89] "kindnet-c6fx4" [fc92c1bc-53fb-49d3-92c2-0c698d1961fb] Running
	I1217 00:44:23.040494  313838 system_pods.go:89] "kube-apiserver-kindnet-802249" [7714e874-38d7-44b5-91f6-28244fbd6e7b] Running
	I1217 00:44:23.040500  313838 system_pods.go:89] "kube-controller-manager-kindnet-802249" [8367fc83-e848-414e-868e-26e124b5399c] Running
	I1217 00:44:23.040507  313838 system_pods.go:89] "kube-proxy-zgfw2" [f3bb7dc0-0c90-4a78-a297-06910340ef6d] Running
	I1217 00:44:23.040513  313838 system_pods.go:89] "kube-scheduler-kindnet-802249" [e873311a-3af1-4901-9080-7e37437a542a] Running
	I1217 00:44:23.040520  313838 system_pods.go:89] "storage-provisioner" [8075127e-89ff-4d27-b16d-5d615bb18953] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 00:44:23.040546  313838 retry.go:31] will retry after 294.943902ms: missing components: kube-dns
	I1217 00:44:23.339316  313838 system_pods.go:86] 8 kube-system pods found
	I1217 00:44:23.339347  313838 system_pods.go:89] "coredns-66bc5c9577-7p575" [06ee85a9-892e-40f5-adf9-42882599366f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 00:44:23.339353  313838 system_pods.go:89] "etcd-kindnet-802249" [a9cfcb70-d987-439c-8579-c1111de52883] Running
	I1217 00:44:23.339359  313838 system_pods.go:89] "kindnet-c6fx4" [fc92c1bc-53fb-49d3-92c2-0c698d1961fb] Running
	I1217 00:44:23.339365  313838 system_pods.go:89] "kube-apiserver-kindnet-802249" [7714e874-38d7-44b5-91f6-28244fbd6e7b] Running
	I1217 00:44:23.339369  313838 system_pods.go:89] "kube-controller-manager-kindnet-802249" [8367fc83-e848-414e-868e-26e124b5399c] Running
	I1217 00:44:23.339373  313838 system_pods.go:89] "kube-proxy-zgfw2" [f3bb7dc0-0c90-4a78-a297-06910340ef6d] Running
	I1217 00:44:23.339376  313838 system_pods.go:89] "kube-scheduler-kindnet-802249" [e873311a-3af1-4901-9080-7e37437a542a] Running
	I1217 00:44:23.339381  313838 system_pods.go:89] "storage-provisioner" [8075127e-89ff-4d27-b16d-5d615bb18953] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 00:44:23.339397  313838 retry.go:31] will retry after 384.557211ms: missing components: kube-dns
	I1217 00:44:23.728486  313838 system_pods.go:86] 8 kube-system pods found
	I1217 00:44:23.728524  313838 system_pods.go:89] "coredns-66bc5c9577-7p575" [06ee85a9-892e-40f5-adf9-42882599366f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 00:44:23.728532  313838 system_pods.go:89] "etcd-kindnet-802249" [a9cfcb70-d987-439c-8579-c1111de52883] Running
	I1217 00:44:23.728541  313838 system_pods.go:89] "kindnet-c6fx4" [fc92c1bc-53fb-49d3-92c2-0c698d1961fb] Running
	I1217 00:44:23.728547  313838 system_pods.go:89] "kube-apiserver-kindnet-802249" [7714e874-38d7-44b5-91f6-28244fbd6e7b] Running
	I1217 00:44:23.728553  313838 system_pods.go:89] "kube-controller-manager-kindnet-802249" [8367fc83-e848-414e-868e-26e124b5399c] Running
	I1217 00:44:23.728563  313838 system_pods.go:89] "kube-proxy-zgfw2" [f3bb7dc0-0c90-4a78-a297-06910340ef6d] Running
	I1217 00:44:23.728568  313838 system_pods.go:89] "kube-scheduler-kindnet-802249" [e873311a-3af1-4901-9080-7e37437a542a] Running
	I1217 00:44:23.728580  313838 system_pods.go:89] "storage-provisioner" [8075127e-89ff-4d27-b16d-5d615bb18953] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 00:44:23.728602  313838 retry.go:31] will retry after 335.761178ms: missing components: kube-dns
	I1217 00:44:20.580256  319582 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1217 00:44:20.580434  319582 start.go:159] libmachine.API.Create for "calico-802249" (driver="docker")
	I1217 00:44:20.580461  319582 client.go:173] LocalClient.Create starting
	I1217 00:44:20.580542  319582 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem
	I1217 00:44:20.580571  319582 main.go:143] libmachine: Decoding PEM data...
	I1217 00:44:20.580591  319582 main.go:143] libmachine: Parsing certificate...
	I1217 00:44:20.580633  319582 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22168-12816/.minikube/certs/cert.pem
	I1217 00:44:20.580655  319582 main.go:143] libmachine: Decoding PEM data...
	I1217 00:44:20.580667  319582 main.go:143] libmachine: Parsing certificate...
	I1217 00:44:20.580985  319582 cli_runner.go:164] Run: docker network inspect calico-802249 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1217 00:44:20.596841  319582 cli_runner.go:211] docker network inspect calico-802249 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1217 00:44:20.596893  319582 network_create.go:284] running [docker network inspect calico-802249] to gather additional debugging logs...
	I1217 00:44:20.596907  319582 cli_runner.go:164] Run: docker network inspect calico-802249
	W1217 00:44:20.613275  319582 cli_runner.go:211] docker network inspect calico-802249 returned with exit code 1
	I1217 00:44:20.613310  319582 network_create.go:287] error running [docker network inspect calico-802249]: docker network inspect calico-802249: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network calico-802249 not found
	I1217 00:44:20.613323  319582 network_create.go:289] output of [docker network inspect calico-802249]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network calico-802249 not found
	
	** /stderr **
	I1217 00:44:20.613402  319582 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 00:44:20.630393  319582 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-6ffd1d738f01 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:6e:3d:52:75:47:82} reservation:<nil>}
	I1217 00:44:20.631199  319582 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-280edd437675 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:5e:ae:02:b5:f9:a6} reservation:<nil>}
	I1217 00:44:20.631966  319582 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-9f28d049043c IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:fa:3f:8e:e9:44:56} reservation:<nil>}
	I1217 00:44:20.632466  319582 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-a57026acfc12 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:aa:e6:32:39:49:3b} reservation:<nil>}
	I1217 00:44:20.633223  319582 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f44be0}
	I1217 00:44:20.633246  319582 network_create.go:124] attempt to create docker network calico-802249 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1217 00:44:20.633292  319582 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-802249 calico-802249
	I1217 00:44:20.681033  319582 network_create.go:108] docker network calico-802249 192.168.85.0/24 created
	I1217 00:44:20.681059  319582 kic.go:121] calculated static IP "192.168.85.2" for the "calico-802249" container
	I1217 00:44:20.681121  319582 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1217 00:44:20.697843  319582 cli_runner.go:164] Run: docker volume create calico-802249 --label name.minikube.sigs.k8s.io=calico-802249 --label created_by.minikube.sigs.k8s.io=true
	I1217 00:44:20.716485  319582 oci.go:103] Successfully created a docker volume calico-802249
	I1217 00:44:20.716567  319582 cli_runner.go:164] Run: docker run --rm --name calico-802249-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-802249 --entrypoint /usr/bin/test -v calico-802249:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib
	I1217 00:44:21.154615  319582 oci.go:107] Successfully prepared a docker volume calico-802249
	I1217 00:44:21.154683  319582 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1217 00:44:21.154696  319582 kic.go:194] Starting extracting preloaded images to volume ...
	I1217 00:44:21.154746  319582 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22168-12816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v calico-802249:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir
	I1217 00:44:25.037433  319582 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22168-12816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v calico-802249:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir: (3.882638898s)
	I1217 00:44:25.037469  319582 kic.go:203] duration metric: took 3.88276888s to extract preloaded images to volume ...
	W1217 00:44:25.037592  319582 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1217 00:44:25.037634  319582 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1217 00:44:25.037684  319582 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1217 00:44:25.093494  319582 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-802249 --name calico-802249 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-802249 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-802249 --network calico-802249 --ip 192.168.85.2 --volume calico-802249:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78
	I1217 00:44:25.366011  319582 cli_runner.go:164] Run: docker container inspect calico-802249 --format={{.State.Running}}
	I1217 00:44:25.384191  319582 cli_runner.go:164] Run: docker container inspect calico-802249 --format={{.State.Status}}
	I1217 00:44:24.278427  313838 system_pods.go:86] 8 kube-system pods found
	I1217 00:44:24.278470  313838 system_pods.go:89] "coredns-66bc5c9577-7p575" [06ee85a9-892e-40f5-adf9-42882599366f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 00:44:24.278479  313838 system_pods.go:89] "etcd-kindnet-802249" [a9cfcb70-d987-439c-8579-c1111de52883] Running
	I1217 00:44:24.278486  313838 system_pods.go:89] "kindnet-c6fx4" [fc92c1bc-53fb-49d3-92c2-0c698d1961fb] Running
	I1217 00:44:24.278492  313838 system_pods.go:89] "kube-apiserver-kindnet-802249" [7714e874-38d7-44b5-91f6-28244fbd6e7b] Running
	I1217 00:44:24.278497  313838 system_pods.go:89] "kube-controller-manager-kindnet-802249" [8367fc83-e848-414e-868e-26e124b5399c] Running
	I1217 00:44:24.278503  313838 system_pods.go:89] "kube-proxy-zgfw2" [f3bb7dc0-0c90-4a78-a297-06910340ef6d] Running
	I1217 00:44:24.278509  313838 system_pods.go:89] "kube-scheduler-kindnet-802249" [e873311a-3af1-4901-9080-7e37437a542a] Running
	I1217 00:44:24.278516  313838 system_pods.go:89] "storage-provisioner" [8075127e-89ff-4d27-b16d-5d615bb18953] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 00:44:24.278536  313838 retry.go:31] will retry after 418.539702ms: missing components: kube-dns
	I1217 00:44:24.710200  313838 system_pods.go:86] 8 kube-system pods found
	I1217 00:44:24.710235  313838 system_pods.go:89] "coredns-66bc5c9577-7p575" [06ee85a9-892e-40f5-adf9-42882599366f] Running
	I1217 00:44:24.710243  313838 system_pods.go:89] "etcd-kindnet-802249" [a9cfcb70-d987-439c-8579-c1111de52883] Running
	I1217 00:44:24.710248  313838 system_pods.go:89] "kindnet-c6fx4" [fc92c1bc-53fb-49d3-92c2-0c698d1961fb] Running
	I1217 00:44:24.710253  313838 system_pods.go:89] "kube-apiserver-kindnet-802249" [7714e874-38d7-44b5-91f6-28244fbd6e7b] Running
	I1217 00:44:24.710259  313838 system_pods.go:89] "kube-controller-manager-kindnet-802249" [8367fc83-e848-414e-868e-26e124b5399c] Running
	I1217 00:44:24.710264  313838 system_pods.go:89] "kube-proxy-zgfw2" [f3bb7dc0-0c90-4a78-a297-06910340ef6d] Running
	I1217 00:44:24.710272  313838 system_pods.go:89] "kube-scheduler-kindnet-802249" [e873311a-3af1-4901-9080-7e37437a542a] Running
	I1217 00:44:24.710278  313838 system_pods.go:89] "storage-provisioner" [8075127e-89ff-4d27-b16d-5d615bb18953] Running
	I1217 00:44:24.710288  313838 system_pods.go:126] duration metric: took 1.672124288s to wait for k8s-apps to be running ...
	I1217 00:44:24.710303  313838 system_svc.go:44] waiting for kubelet service to be running ....
	I1217 00:44:24.710355  313838 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 00:44:24.723823  313838 system_svc.go:56] duration metric: took 13.498592ms WaitForService to wait for kubelet
	I1217 00:44:24.723849  313838 kubeadm.go:587] duration metric: took 12.550509636s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 00:44:24.723865  313838 node_conditions.go:102] verifying NodePressure condition ...
	I1217 00:44:24.792708  313838 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1217 00:44:24.792748  313838 node_conditions.go:123] node cpu capacity is 8
	I1217 00:44:24.792767  313838 node_conditions.go:105] duration metric: took 68.896464ms to run NodePressure ...
	I1217 00:44:24.792781  313838 start.go:242] waiting for startup goroutines ...
	I1217 00:44:24.792791  313838 start.go:247] waiting for cluster config update ...
	I1217 00:44:24.792805  313838 start.go:256] writing updated cluster config ...
	I1217 00:44:24.793140  313838 ssh_runner.go:195] Run: rm -f paused
	I1217 00:44:24.797054  313838 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 00:44:24.882662  313838 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-7p575" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:44:24.887094  313838 pod_ready.go:94] pod "coredns-66bc5c9577-7p575" is "Ready"
	I1217 00:44:24.887120  313838 pod_ready.go:86] duration metric: took 4.427201ms for pod "coredns-66bc5c9577-7p575" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:44:24.889066  313838 pod_ready.go:83] waiting for pod "etcd-kindnet-802249" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:44:24.892687  313838 pod_ready.go:94] pod "etcd-kindnet-802249" is "Ready"
	I1217 00:44:24.892704  313838 pod_ready.go:86] duration metric: took 3.618793ms for pod "etcd-kindnet-802249" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:44:24.894439  313838 pod_ready.go:83] waiting for pod "kube-apiserver-kindnet-802249" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:44:24.898224  313838 pod_ready.go:94] pod "kube-apiserver-kindnet-802249" is "Ready"
	I1217 00:44:24.898246  313838 pod_ready.go:86] duration metric: took 3.787233ms for pod "kube-apiserver-kindnet-802249" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:44:24.900076  313838 pod_ready.go:83] waiting for pod "kube-controller-manager-kindnet-802249" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:44:25.200940  313838 pod_ready.go:94] pod "kube-controller-manager-kindnet-802249" is "Ready"
	I1217 00:44:25.200964  313838 pod_ready.go:86] duration metric: took 300.87247ms for pod "kube-controller-manager-kindnet-802249" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:44:25.401734  313838 pod_ready.go:83] waiting for pod "kube-proxy-zgfw2" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:44:25.801456  313838 pod_ready.go:94] pod "kube-proxy-zgfw2" is "Ready"
	I1217 00:44:25.801482  313838 pod_ready.go:86] duration metric: took 399.721184ms for pod "kube-proxy-zgfw2" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:44:26.002017  313838 pod_ready.go:83] waiting for pod "kube-scheduler-kindnet-802249" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:44:26.400925  313838 pod_ready.go:94] pod "kube-scheduler-kindnet-802249" is "Ready"
	I1217 00:44:26.400951  313838 pod_ready.go:86] duration metric: took 398.90712ms for pod "kube-scheduler-kindnet-802249" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:44:26.400963  313838 pod_ready.go:40] duration metric: took 1.603874856s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 00:44:26.446501  313838 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1217 00:44:26.448292  313838 out.go:179] * Done! kubectl is now configured to use "kindnet-802249" cluster and "default" namespace by default
	W1217 00:44:24.033295  307526 node_ready.go:57] node "auto-802249" has "Ready":"False" status (will retry)
	W1217 00:44:26.532629  307526 node_ready.go:57] node "auto-802249" has "Ready":"False" status (will retry)
	I1217 00:44:25.403093  319582 cli_runner.go:164] Run: docker exec calico-802249 stat /var/lib/dpkg/alternatives/iptables
	I1217 00:44:25.452233  319582 oci.go:144] the created container "calico-802249" has a running status.
	I1217 00:44:25.452265  319582 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22168-12816/.minikube/machines/calico-802249/id_rsa...
	I1217 00:44:25.515430  319582 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22168-12816/.minikube/machines/calico-802249/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1217 00:44:25.544415  319582 cli_runner.go:164] Run: docker container inspect calico-802249 --format={{.State.Status}}
	I1217 00:44:25.562228  319582 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1217 00:44:25.562260  319582 kic_runner.go:114] Args: [docker exec --privileged calico-802249 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1217 00:44:25.633589  319582 cli_runner.go:164] Run: docker container inspect calico-802249 --format={{.State.Status}}
	I1217 00:44:25.661963  319582 machine.go:94] provisionDockerMachine start ...
	I1217 00:44:25.662130  319582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-802249
	I1217 00:44:25.687878  319582 main.go:143] libmachine: Using SSH client type: native
	I1217 00:44:25.688164  319582 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1217 00:44:25.688182  319582 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 00:44:25.824334  319582 main.go:143] libmachine: SSH cmd err, output: <nil>: calico-802249
	
	I1217 00:44:25.824361  319582 ubuntu.go:182] provisioning hostname "calico-802249"
	I1217 00:44:25.824424  319582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-802249
	I1217 00:44:25.843884  319582 main.go:143] libmachine: Using SSH client type: native
	I1217 00:44:25.844198  319582 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1217 00:44:25.844222  319582 main.go:143] libmachine: About to run SSH command:
	sudo hostname calico-802249 && echo "calico-802249" | sudo tee /etc/hostname
	I1217 00:44:25.985487  319582 main.go:143] libmachine: SSH cmd err, output: <nil>: calico-802249
	
	I1217 00:44:25.985583  319582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-802249
	I1217 00:44:26.005922  319582 main.go:143] libmachine: Using SSH client type: native
	I1217 00:44:26.006162  319582 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1217 00:44:26.006187  319582 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-802249' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-802249/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-802249' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 00:44:26.134525  319582 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 00:44:26.134550  319582 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22168-12816/.minikube CaCertPath:/home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22168-12816/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22168-12816/.minikube}
	I1217 00:44:26.134588  319582 ubuntu.go:190] setting up certificates
	I1217 00:44:26.134601  319582 provision.go:84] configureAuth start
	I1217 00:44:26.134651  319582 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-802249
	I1217 00:44:26.152007  319582 provision.go:143] copyHostCerts
	I1217 00:44:26.152066  319582 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-12816/.minikube/cert.pem, removing ...
	I1217 00:44:26.152089  319582 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-12816/.minikube/cert.pem
	I1217 00:44:26.152170  319582 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22168-12816/.minikube/cert.pem (1123 bytes)
	I1217 00:44:26.152304  319582 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-12816/.minikube/key.pem, removing ...
	I1217 00:44:26.152320  319582 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-12816/.minikube/key.pem
	I1217 00:44:26.152362  319582 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22168-12816/.minikube/key.pem (1679 bytes)
	I1217 00:44:26.152553  319582 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-12816/.minikube/ca.pem, removing ...
	I1217 00:44:26.152571  319582 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-12816/.minikube/ca.pem
	I1217 00:44:26.152618  319582 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22168-12816/.minikube/ca.pem (1078 bytes)
	I1217 00:44:26.152700  319582 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22168-12816/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca-key.pem org=jenkins.calico-802249 san=[127.0.0.1 192.168.85.2 calico-802249 localhost minikube]
	I1217 00:44:26.227431  319582 provision.go:177] copyRemoteCerts
	I1217 00:44:26.227481  319582 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 00:44:26.227519  319582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-802249
	I1217 00:44:26.246084  319582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/calico-802249/id_rsa Username:docker}
	I1217 00:44:26.338029  319582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1217 00:44:26.356578  319582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1217 00:44:26.374546  319582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1217 00:44:26.391750  319582 provision.go:87] duration metric: took 257.128596ms to configureAuth
	I1217 00:44:26.391776  319582 ubuntu.go:206] setting minikube options for container-runtime
	I1217 00:44:26.391924  319582 config.go:182] Loaded profile config "calico-802249": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:44:26.392031  319582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-802249
	I1217 00:44:26.411886  319582 main.go:143] libmachine: Using SSH client type: native
	I1217 00:44:26.412190  319582 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1217 00:44:26.412211  319582 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 00:44:26.675003  319582 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 00:44:26.675027  319582 machine.go:97] duration metric: took 1.013041442s to provisionDockerMachine
	I1217 00:44:26.675040  319582 client.go:176] duration metric: took 6.094570662s to LocalClient.Create
	I1217 00:44:26.675059  319582 start.go:167] duration metric: took 6.094627011s to libmachine.API.Create "calico-802249"
	I1217 00:44:26.675067  319582 start.go:293] postStartSetup for "calico-802249" (driver="docker")
	I1217 00:44:26.675075  319582 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 00:44:26.675125  319582 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 00:44:26.675169  319582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-802249
	I1217 00:44:26.693165  319582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/calico-802249/id_rsa Username:docker}
	I1217 00:44:26.789173  319582 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 00:44:26.793016  319582 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 00:44:26.793050  319582 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 00:44:26.793064  319582 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-12816/.minikube/addons for local assets ...
	I1217 00:44:26.793130  319582 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-12816/.minikube/files for local assets ...
	I1217 00:44:26.793220  319582 filesync.go:149] local asset: /home/jenkins/minikube-integration/22168-12816/.minikube/files/etc/ssl/certs/163542.pem -> 163542.pem in /etc/ssl/certs
	I1217 00:44:26.793313  319582 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 00:44:26.801278  319582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/files/etc/ssl/certs/163542.pem --> /etc/ssl/certs/163542.pem (1708 bytes)
	I1217 00:44:26.820968  319582 start.go:296] duration metric: took 145.88909ms for postStartSetup
	I1217 00:44:26.821360  319582 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-802249
	I1217 00:44:26.840363  319582 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/calico-802249/config.json ...
	I1217 00:44:26.840641  319582 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 00:44:26.840681  319582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-802249
	I1217 00:44:26.858885  319582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/calico-802249/id_rsa Username:docker}
	I1217 00:44:26.947689  319582 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 00:44:26.951964  319582 start.go:128] duration metric: took 6.373877572s to createHost
	I1217 00:44:26.951987  319582 start.go:83] releasing machines lock for "calico-802249", held for 6.374010053s
	I1217 00:44:26.952079  319582 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-802249
	I1217 00:44:26.970842  319582 ssh_runner.go:195] Run: cat /version.json
	I1217 00:44:26.970880  319582 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 00:44:26.970888  319582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-802249
	I1217 00:44:26.970951  319582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-802249
	I1217 00:44:26.989508  319582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/calico-802249/id_rsa Username:docker}
	I1217 00:44:26.990719  319582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/calico-802249/id_rsa Username:docker}
	I1217 00:44:27.135646  319582 ssh_runner.go:195] Run: systemctl --version
	I1217 00:44:27.142053  319582 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 00:44:27.174929  319582 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 00:44:27.179565  319582 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 00:44:27.179617  319582 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 00:44:27.204370  319582 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1217 00:44:27.204394  319582 start.go:496] detecting cgroup driver to use...
	I1217 00:44:27.204426  319582 detect.go:190] detected "systemd" cgroup driver on host os
	I1217 00:44:27.204482  319582 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 00:44:27.219820  319582 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 00:44:27.231592  319582 docker.go:218] disabling cri-docker service (if available) ...
	I1217 00:44:27.231646  319582 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 00:44:27.247220  319582 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 00:44:27.264961  319582 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 00:44:27.345536  319582 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 00:44:27.433598  319582 docker.go:234] disabling docker service ...
	I1217 00:44:27.433662  319582 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 00:44:27.451665  319582 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 00:44:27.463965  319582 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 00:44:27.551315  319582 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 00:44:27.647267  319582 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 00:44:27.660430  319582 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 00:44:27.676343  319582 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 00:44:27.676402  319582 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:44:27.687080  319582 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1217 00:44:27.687131  319582 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:44:27.695977  319582 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:44:27.705954  319582 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:44:27.715684  319582 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 00:44:27.723901  319582 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:44:27.733497  319582 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:44:27.748094  319582 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:44:27.756711  319582 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 00:44:27.763711  319582 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 00:44:27.770663  319582 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:44:27.860623  319582 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1217 00:44:28.015182  319582 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 00:44:28.015257  319582 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 00:44:28.019859  319582 start.go:564] Will wait 60s for crictl version
	I1217 00:44:28.019911  319582 ssh_runner.go:195] Run: which crictl
	I1217 00:44:28.024129  319582 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 00:44:28.051129  319582 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1217 00:44:28.051215  319582 ssh_runner.go:195] Run: crio --version
	I1217 00:44:28.081251  319582 ssh_runner.go:195] Run: crio --version
	I1217 00:44:28.109127  319582 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1217 00:44:28.110183  319582 cli_runner.go:164] Run: docker network inspect calico-802249 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 00:44:28.127820  319582 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1217 00:44:28.131677  319582 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 00:44:28.142923  319582 kubeadm.go:884] updating cluster {Name:calico-802249 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:calico-802249 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 00:44:28.143075  319582 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1217 00:44:28.143137  319582 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 00:44:28.174524  319582 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 00:44:28.174547  319582 crio.go:433] Images already preloaded, skipping extraction
	I1217 00:44:28.174596  319582 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 00:44:28.200982  319582 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 00:44:28.201016  319582 cache_images.go:86] Images are preloaded, skipping loading
	I1217 00:44:28.201026  319582 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.2 crio true true} ...
	I1217 00:44:28.201135  319582 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=calico-802249 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:calico-802249 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico}
	I1217 00:44:28.201216  319582 ssh_runner.go:195] Run: crio config
	I1217 00:44:28.257059  319582 cni.go:84] Creating CNI manager for "calico"
	I1217 00:44:28.257098  319582 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 00:44:28.257119  319582 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-802249 NodeName:calico-802249 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernet
es/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 00:44:28.257236  319582 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "calico-802249"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 00:44:28.257308  319582 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1217 00:44:28.265585  319582 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 00:44:28.265636  319582 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 00:44:28.273048  319582 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1217 00:44:28.285948  319582 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1217 00:44:28.301433  319582 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1217 00:44:28.314069  319582 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1217 00:44:28.317541  319582 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 00:44:28.327599  319582 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:44:28.417717  319582 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 00:44:28.438003  319582 certs.go:69] Setting up /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/calico-802249 for IP: 192.168.85.2
	I1217 00:44:28.438023  319582 certs.go:195] generating shared ca certs ...
	I1217 00:44:28.438046  319582 certs.go:227] acquiring lock for ca certs: {Name:mk3fafd0dd66863a6056cb02497503a5e6afecf4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:44:28.438206  319582 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22168-12816/.minikube/ca.key
	I1217 00:44:28.438262  319582 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22168-12816/.minikube/proxy-client-ca.key
	I1217 00:44:28.438275  319582 certs.go:257] generating profile certs ...
	I1217 00:44:28.438352  319582 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/calico-802249/client.key
	I1217 00:44:28.438374  319582 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/calico-802249/client.crt with IP's: []
	I1217 00:44:28.594061  319582 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/calico-802249/client.crt ...
	I1217 00:44:28.594090  319582 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/calico-802249/client.crt: {Name:mk0f33fd9eb0ec9a39ed2527660f28f5b980216e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:44:28.594268  319582 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/calico-802249/client.key ...
	I1217 00:44:28.594285  319582 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/calico-802249/client.key: {Name:mk2489c51abf85e78b6933083c4e8e1afb653b56 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:44:28.594399  319582 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/calico-802249/apiserver.key.3b2fb64b
	I1217 00:44:28.594419  319582 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/calico-802249/apiserver.crt.3b2fb64b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1217 00:44:28.665654  319582 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/calico-802249/apiserver.crt.3b2fb64b ...
	I1217 00:44:28.665677  319582 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/calico-802249/apiserver.crt.3b2fb64b: {Name:mk9f43cef323fb9e72972c64abf288ee1463490b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:44:28.665806  319582 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/calico-802249/apiserver.key.3b2fb64b ...
	I1217 00:44:28.665819  319582 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/calico-802249/apiserver.key.3b2fb64b: {Name:mk991dd896eccda5a6442518f7641b182c3da38f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:44:28.665900  319582 certs.go:382] copying /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/calico-802249/apiserver.crt.3b2fb64b -> /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/calico-802249/apiserver.crt
	I1217 00:44:28.665982  319582 certs.go:386] copying /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/calico-802249/apiserver.key.3b2fb64b -> /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/calico-802249/apiserver.key
	I1217 00:44:28.666072  319582 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/calico-802249/proxy-client.key
	I1217 00:44:28.666090  319582 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/calico-802249/proxy-client.crt with IP's: []
	I1217 00:44:28.772114  319582 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/calico-802249/proxy-client.crt ...
	I1217 00:44:28.772142  319582 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/calico-802249/proxy-client.crt: {Name:mk0ac25c98b5a6c77bc00c7d2d2f81d3b655c72c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:44:28.772329  319582 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/calico-802249/proxy-client.key ...
	I1217 00:44:28.772347  319582 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/calico-802249/proxy-client.key: {Name:mk3c305449f15905a8ec0a69180c43478a6fe6a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:44:28.772566  319582 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/16354.pem (1338 bytes)
	W1217 00:44:28.772609  319582 certs.go:480] ignoring /home/jenkins/minikube-integration/22168-12816/.minikube/certs/16354_empty.pem, impossibly tiny 0 bytes
	I1217 00:44:28.772618  319582 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca-key.pem (1679 bytes)
	I1217 00:44:28.772650  319582 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/ca.pem (1078 bytes)
	I1217 00:44:28.772678  319582 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/cert.pem (1123 bytes)
	I1217 00:44:28.772705  319582 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/certs/key.pem (1679 bytes)
	I1217 00:44:28.772754  319582 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12816/.minikube/files/etc/ssl/certs/163542.pem (1708 bytes)
	I1217 00:44:28.773309  319582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 00:44:28.791255  319582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1217 00:44:28.811072  319582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 00:44:28.830941  319582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1671 bytes)
	I1217 00:44:28.849912  319582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/calico-802249/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1217 00:44:28.867507  319582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/calico-802249/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 00:44:28.887060  319582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/calico-802249/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 00:44:28.904705  319582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/calico-802249/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 00:44:28.921309  319582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 00:44:28.940803  319582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/certs/16354.pem --> /usr/share/ca-certificates/16354.pem (1338 bytes)
	I1217 00:44:28.957611  319582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12816/.minikube/files/etc/ssl/certs/163542.pem --> /usr/share/ca-certificates/163542.pem (1708 bytes)
	I1217 00:44:28.974238  319582 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 00:44:28.985740  319582 ssh_runner.go:195] Run: openssl version
	I1217 00:44:28.991479  319582 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:44:28.998478  319582 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 00:44:29.005852  319582 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:44:29.009464  319582 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 00:05 /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:44:29.009524  319582 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:44:29.045443  319582 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 00:44:29.053385  319582 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1217 00:44:29.060469  319582 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/16354.pem
	I1217 00:44:29.067439  319582 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/16354.pem /etc/ssl/certs/16354.pem
	I1217 00:44:29.074260  319582 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16354.pem
	I1217 00:44:29.077668  319582 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 00:13 /usr/share/ca-certificates/16354.pem
	I1217 00:44:29.077714  319582 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16354.pem
	I1217 00:44:29.113345  319582 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 00:44:29.120368  319582 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/16354.pem /etc/ssl/certs/51391683.0
	I1217 00:44:29.127450  319582 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/163542.pem
	I1217 00:44:29.134913  319582 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/163542.pem /etc/ssl/certs/163542.pem
	I1217 00:44:29.142193  319582 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/163542.pem
	I1217 00:44:29.145713  319582 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 00:13 /usr/share/ca-certificates/163542.pem
	I1217 00:44:29.145761  319582 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/163542.pem
	I1217 00:44:29.180919  319582 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 00:44:29.188000  319582 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/163542.pem /etc/ssl/certs/3ec20f2e.0
	I1217 00:44:29.194920  319582 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 00:44:29.198258  319582 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1217 00:44:29.198310  319582 kubeadm.go:401] StartCluster: {Name:calico-802249 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:calico-802249 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames
:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: Soc
ketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:44:29.198375  319582 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 00:44:29.198406  319582 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 00:44:29.224212  319582 cri.go:89] found id: ""
	I1217 00:44:29.224270  319582 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 00:44:29.232058  319582 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 00:44:29.239714  319582 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 00:44:29.239760  319582 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 00:44:29.247085  319582 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 00:44:29.247104  319582 kubeadm.go:158] found existing configuration files:
	
	I1217 00:44:29.247154  319582 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1217 00:44:29.254280  319582 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 00:44:29.254326  319582 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 00:44:29.261387  319582 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1217 00:44:29.268820  319582 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 00:44:29.268858  319582 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 00:44:29.276159  319582 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1217 00:44:29.283902  319582 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 00:44:29.283958  319582 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 00:44:29.291708  319582 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1217 00:44:29.299893  319582 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 00:44:29.299945  319582 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 00:44:29.307435  319582 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 00:44:29.346623  319582 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1217 00:44:29.346701  319582 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 00:44:29.368094  319582 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 00:44:29.368197  319582 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1217 00:44:29.368252  319582 kubeadm.go:319] OS: Linux
	I1217 00:44:29.368322  319582 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 00:44:29.368388  319582 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 00:44:29.368477  319582 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 00:44:29.368547  319582 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 00:44:29.368627  319582 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 00:44:29.368786  319582 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 00:44:29.368873  319582 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 00:44:29.368944  319582 kubeadm.go:319] CGROUPS_IO: enabled
	I1217 00:44:29.438236  319582 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 00:44:29.438367  319582 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 00:44:29.438493  319582 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 00:44:29.446354  319582 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 00:44:29.448351  319582 out.go:252]   - Generating certificates and keys ...
	I1217 00:44:29.448457  319582 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 00:44:29.448565  319582 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 00:44:29.630704  319582 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1217 00:44:29.911022  319582 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1217 00:44:30.230096  319582 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	
	
	==> CRI-O <==
	Dec 17 00:43:49 default-k8s-diff-port-414413 crio[566]: time="2025-12-17T00:43:49.390316678Z" level=info msg="Started container" PID=1743 containerID=f4fe7e2efb6c9874e3b2f9dacc235373b8e72b902ea1ddeb7fd1caa95111c574 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-pwxgr/dashboard-metrics-scraper id=b4da63f3-f49e-4edb-a531-a130b5cef0f8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3cd8ddbd09d24f2de4ff24d7acc1878a1f5db8baa6841072707f41b4ac6bf783
	Dec 17 00:43:50 default-k8s-diff-port-414413 crio[566]: time="2025-12-17T00:43:50.326671524Z" level=info msg="Removing container: 6a4901560df386b43625704e1859edc7fbe21d9c08ece38745b6655e35020604" id=f37be39e-1f1d-4ea2-8ac5-3334e639742f name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 17 00:43:50 default-k8s-diff-port-414413 crio[566]: time="2025-12-17T00:43:50.337163789Z" level=info msg="Removed container 6a4901560df386b43625704e1859edc7fbe21d9c08ece38745b6655e35020604: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-pwxgr/dashboard-metrics-scraper" id=f37be39e-1f1d-4ea2-8ac5-3334e639742f name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 17 00:44:08 default-k8s-diff-port-414413 crio[566]: time="2025-12-17T00:44:08.370738256Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=0b71ac28-61bd-44c6-9227-b9df7bf02c72 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:44:08 default-k8s-diff-port-414413 crio[566]: time="2025-12-17T00:44:08.371668142Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=7873d776-f549-4b0d-89e1-e021109af006 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:44:08 default-k8s-diff-port-414413 crio[566]: time="2025-12-17T00:44:08.372682328Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=0482844c-d87a-4538-ac64-881ff7d5860b name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 00:44:08 default-k8s-diff-port-414413 crio[566]: time="2025-12-17T00:44:08.372818746Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 00:44:08 default-k8s-diff-port-414413 crio[566]: time="2025-12-17T00:44:08.377561777Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 00:44:08 default-k8s-diff-port-414413 crio[566]: time="2025-12-17T00:44:08.377689947Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/7bf7c7b2e63d17d9205cb2138cda310c280d8e5ccbb472bf9c96acd6b3201272/merged/etc/passwd: no such file or directory"
	Dec 17 00:44:08 default-k8s-diff-port-414413 crio[566]: time="2025-12-17T00:44:08.377710891Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/7bf7c7b2e63d17d9205cb2138cda310c280d8e5ccbb472bf9c96acd6b3201272/merged/etc/group: no such file or directory"
	Dec 17 00:44:08 default-k8s-diff-port-414413 crio[566]: time="2025-12-17T00:44:08.377912122Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 00:44:08 default-k8s-diff-port-414413 crio[566]: time="2025-12-17T00:44:08.430244555Z" level=info msg="Created container 29dcc2e0fca01e5acc47fd9e7b42b73755a799de4a843cc7448f1cf3d24c1370: kube-system/storage-provisioner/storage-provisioner" id=0482844c-d87a-4538-ac64-881ff7d5860b name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 00:44:08 default-k8s-diff-port-414413 crio[566]: time="2025-12-17T00:44:08.430875327Z" level=info msg="Starting container: 29dcc2e0fca01e5acc47fd9e7b42b73755a799de4a843cc7448f1cf3d24c1370" id=0a755e85-d7b3-472c-9730-d2d4ed609112 name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 00:44:08 default-k8s-diff-port-414413 crio[566]: time="2025-12-17T00:44:08.432653105Z" level=info msg="Started container" PID=1761 containerID=29dcc2e0fca01e5acc47fd9e7b42b73755a799de4a843cc7448f1cf3d24c1370 description=kube-system/storage-provisioner/storage-provisioner id=0a755e85-d7b3-472c-9730-d2d4ed609112 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c9475c845c680cb0fbd371608c513dfc36cff9c0d8f39335db188ead17a1dd4a
	Dec 17 00:44:12 default-k8s-diff-port-414413 crio[566]: time="2025-12-17T00:44:12.262304069Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=09578481-1742-4aa5-baff-86a940a7efd9 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:44:12 default-k8s-diff-port-414413 crio[566]: time="2025-12-17T00:44:12.263484023Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=25e9fe98-ac0a-4e95-ba7e-a2083fa7eb89 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:44:12 default-k8s-diff-port-414413 crio[566]: time="2025-12-17T00:44:12.265095166Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-pwxgr/dashboard-metrics-scraper" id=902c95a2-14db-4fee-9cbd-93e054ab7b6d name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 00:44:12 default-k8s-diff-port-414413 crio[566]: time="2025-12-17T00:44:12.265307275Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 00:44:12 default-k8s-diff-port-414413 crio[566]: time="2025-12-17T00:44:12.271630407Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 00:44:12 default-k8s-diff-port-414413 crio[566]: time="2025-12-17T00:44:12.272361979Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 00:44:12 default-k8s-diff-port-414413 crio[566]: time="2025-12-17T00:44:12.298697486Z" level=info msg="Created container d2a2a6abdc96c42c27ab0c3e8b49c402a202b687de42012d4e22faf078a53746: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-pwxgr/dashboard-metrics-scraper" id=902c95a2-14db-4fee-9cbd-93e054ab7b6d name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 00:44:12 default-k8s-diff-port-414413 crio[566]: time="2025-12-17T00:44:12.299939127Z" level=info msg="Starting container: d2a2a6abdc96c42c27ab0c3e8b49c402a202b687de42012d4e22faf078a53746" id=45dd9baa-cbeb-413a-b471-cf9461351346 name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 00:44:12 default-k8s-diff-port-414413 crio[566]: time="2025-12-17T00:44:12.302297717Z" level=info msg="Started container" PID=1777 containerID=d2a2a6abdc96c42c27ab0c3e8b49c402a202b687de42012d4e22faf078a53746 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-pwxgr/dashboard-metrics-scraper id=45dd9baa-cbeb-413a-b471-cf9461351346 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3cd8ddbd09d24f2de4ff24d7acc1878a1f5db8baa6841072707f41b4ac6bf783
	Dec 17 00:44:12 default-k8s-diff-port-414413 crio[566]: time="2025-12-17T00:44:12.388535374Z" level=info msg="Removing container: f4fe7e2efb6c9874e3b2f9dacc235373b8e72b902ea1ddeb7fd1caa95111c574" id=454acaae-017c-47d3-a1e2-e6ad5365a98c name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 17 00:44:12 default-k8s-diff-port-414413 crio[566]: time="2025-12-17T00:44:12.402153655Z" level=info msg="Removed container f4fe7e2efb6c9874e3b2f9dacc235373b8e72b902ea1ddeb7fd1caa95111c574: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-pwxgr/dashboard-metrics-scraper" id=454acaae-017c-47d3-a1e2-e6ad5365a98c name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	d2a2a6abdc96c       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           20 seconds ago      Exited              dashboard-metrics-scraper   2                   3cd8ddbd09d24       dashboard-metrics-scraper-6ffb444bf9-pwxgr             kubernetes-dashboard
	29dcc2e0fca01       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           23 seconds ago      Running             storage-provisioner         1                   c9475c845c680       storage-provisioner                                    kube-system
	554f0df62e1c2       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   46 seconds ago      Running             kubernetes-dashboard        0                   fa4f371ed44d2       kubernetes-dashboard-855c9754f9-wnwc6                  kubernetes-dashboard
	b82b299f948d7       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           54 seconds ago      Running             coredns                     0                   4159bb6b9cc1a       coredns-66bc5c9577-v76f4                               kube-system
	1a5294d009027       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           54 seconds ago      Running             busybox                     1                   1793c737adaeb       busybox                                                default
	275d3d03f2346       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           54 seconds ago      Exited              storage-provisioner         0                   c9475c845c680       storage-provisioner                                    kube-system
	bbca296c30f3d       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                           54 seconds ago      Running             kube-proxy                  0                   373424454badc       kube-proxy-prlkw                                       kube-system
	1b6d441ac73c0       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           54 seconds ago      Running             kindnet-cni                 0                   4cb9b78188fe4       kindnet-hxhbf                                          kube-system
	2a7b291de067a       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                           57 seconds ago      Running             kube-apiserver              0                   4b52776844a99       kube-apiserver-default-k8s-diff-port-414413            kube-system
	4dcc77a289bba       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                           57 seconds ago      Running             etcd                        0                   b9e3daed73cb4       etcd-default-k8s-diff-port-414413                      kube-system
	ba3df04c6b3fe       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                           57 seconds ago      Running             kube-scheduler              0                   0e39de61468ce       kube-scheduler-default-k8s-diff-port-414413            kube-system
	eecadcae34c36       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                           57 seconds ago      Running             kube-controller-manager     0                   9e0781fc25868       kube-controller-manager-default-k8s-diff-port-414413   kube-system
	
	
	==> coredns [b82b299f948d717658d6977755447250d679af51d1b6071b37f467e8810d95bf] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:34951 - 9759 "HINFO IN 2097766802274907779.5305395646306382892. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.492289075s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-414413
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-414413
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c7bb9b74fe8fa422b352c813eb039f077f405cb1
	                    minikube.k8s.io/name=default-k8s-diff-port-414413
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_17T00_42_36_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Dec 2025 00:42:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-414413
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Dec 2025 00:44:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Dec 2025 00:44:27 +0000   Wed, 17 Dec 2025 00:42:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Dec 2025 00:44:27 +0000   Wed, 17 Dec 2025 00:42:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Dec 2025 00:44:27 +0000   Wed, 17 Dec 2025 00:42:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Dec 2025 00:44:27 +0000   Wed, 17 Dec 2025 00:42:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-414413
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 f435001bb2f82675fe905cfc693dda54
	  System UUID:                30e488d3-49b2-4dae-91a3-bdf1e8cb0774
	  Boot ID:                    0e9cedc6-c46e-4354-b3d2-9272a8b33ae5
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 coredns-66bc5c9577-v76f4                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     111s
	  kube-system                 etcd-default-k8s-diff-port-414413                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         117s
	  kube-system                 kindnet-hxhbf                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      112s
	  kube-system                 kube-apiserver-default-k8s-diff-port-414413             250m (3%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-414413    200m (2%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-proxy-prlkw                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-scheduler-default-k8s-diff-port-414413             100m (1%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-pwxgr              0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-wnwc6                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 110s               kube-proxy       
	  Normal  Starting                 54s                kube-proxy       
	  Normal  NodeHasSufficientMemory  117s               kubelet          Node default-k8s-diff-port-414413 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    117s               kubelet          Node default-k8s-diff-port-414413 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     117s               kubelet          Node default-k8s-diff-port-414413 status is now: NodeHasSufficientPID
	  Normal  Starting                 117s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           112s               node-controller  Node default-k8s-diff-port-414413 event: Registered Node default-k8s-diff-port-414413 in Controller
	  Normal  NodeReady                100s               kubelet          Node default-k8s-diff-port-414413 status is now: NodeReady
	  Normal  Starting                 58s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  58s (x8 over 58s)  kubelet          Node default-k8s-diff-port-414413 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    58s (x8 over 58s)  kubelet          Node default-k8s-diff-port-414413 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     58s (x8 over 58s)  kubelet          Node default-k8s-diff-port-414413 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           52s                node-controller  Node default-k8s-diff-port-414413 event: Registered Node default-k8s-diff-port-414413 in Controller
	
	
	==> dmesg <==
	[  +0.089382] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024236] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.864694] kauditd_printk_skb: 47 callbacks suppressed
	[Dec17 00:07] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +1.006904] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +1.023891] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +1.023891] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +1.023853] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +1.023879] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +2.048755] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +4.030595] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[  +8.447143] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[ +16.382404] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000015] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	[Dec17 00:08] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 00 f8 03 33 1b 0e b2 d7 16 68 94 08 00
	
	
	==> etcd [4dcc77a289bba808ececc2d4f0efa70e966e843b2057d6de5ad0054d0be435c8] <==
	{"level":"warn","ts":"2025-12-17T00:43:35.945210Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55724","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:35.954650Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55732","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:35.969323Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55746","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:35.975906Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55760","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:35.983544Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55776","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:35.993400Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55806","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:36.010807Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55840","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:36.012840Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55824","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:36.017117Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55852","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:36.024680Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55864","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:36.031644Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55880","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:36.038156Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55904","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:36.044403Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55930","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:36.053941Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55944","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:36.061944Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55956","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:36.069324Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55978","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:36.076667Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55990","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:36.083136Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:36.090217Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56022","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:36.096816Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56042","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:36.116463Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56062","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:36.124649Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56082","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:36.132667Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56114","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:43:36.185443Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56150","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-17T00:43:47.269702Z","caller":"traceutil/trace.go:172","msg":"trace[76811734] transaction","detail":"{read_only:false; response_revision:580; number_of_response:1; }","duration":"124.176168ms","start":"2025-12-17T00:43:47.145505Z","end":"2025-12-17T00:43:47.269682Z","steps":["trace[76811734] 'process raft request'  (duration: 124.057881ms)"],"step_count":1}
	
	
	==> kernel <==
	 00:44:32 up  1:27,  0 user,  load average: 2.84, 2.85, 2.02
	Linux default-k8s-diff-port-414413 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [1b6d441ac73c0906999e2b074e7fb8e741006a82fa543a72336ae290aef62cf4] <==
	I1217 00:43:37.862276       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1217 00:43:37.862602       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1217 00:43:37.862767       1 main.go:148] setting mtu 1500 for CNI 
	I1217 00:43:37.862787       1 main.go:178] kindnetd IP family: "ipv4"
	I1217 00:43:37.862815       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-17T00:43:38Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1217 00:43:38.062320       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1217 00:43:38.062347       1 controller.go:381] "Waiting for informer caches to sync"
	I1217 00:43:38.062361       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1217 00:43:38.129736       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1217 00:43:38.530040       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1217 00:43:38.530067       1 metrics.go:72] Registering metrics
	I1217 00:43:38.530136       1 controller.go:711] "Syncing nftables rules"
	I1217 00:43:48.062155       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1217 00:43:48.062197       1 main.go:301] handling current node
	I1217 00:43:58.064458       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1217 00:43:58.064571       1 main.go:301] handling current node
	I1217 00:44:08.063254       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1217 00:44:08.063293       1 main.go:301] handling current node
	I1217 00:44:18.062743       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1217 00:44:18.062799       1 main.go:301] handling current node
	I1217 00:44:28.062857       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1217 00:44:28.062906       1 main.go:301] handling current node
	
	
	==> kube-apiserver [2a7b291de067a5044f406eaa0104c52261424e3730e6c2e4d38864b41943eddd] <==
	I1217 00:43:36.689090       1 autoregister_controller.go:144] Starting autoregister controller
	I1217 00:43:36.689097       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1217 00:43:36.689103       1 cache.go:39] Caches are synced for autoregister controller
	I1217 00:43:36.689135       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1217 00:43:36.689170       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1217 00:43:36.689200       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1217 00:43:36.689208       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1217 00:43:36.689228       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1217 00:43:36.689783       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1217 00:43:36.700181       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1217 00:43:36.705127       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1217 00:43:36.715910       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1217 00:43:36.716377       1 policy_source.go:240] refreshing policies
	I1217 00:43:36.727222       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1217 00:43:36.994093       1 controller.go:667] quota admission added evaluator for: namespaces
	I1217 00:43:37.023797       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1217 00:43:37.043847       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1217 00:43:37.050695       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1217 00:43:37.058095       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1217 00:43:37.086463       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.98.127.68"}
	I1217 00:43:37.096116       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.101.63.172"}
	I1217 00:43:37.594952       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1217 00:43:40.414329       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1217 00:43:40.463285       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1217 00:43:40.564772       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [eecadcae34c3698337c66c6d6dbab2066993e3216b64d194344407552bc449b5] <==
	I1217 00:43:40.010114       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1217 00:43:40.010160       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1217 00:43:40.010205       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1217 00:43:40.010220       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1217 00:43:40.010230       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1217 00:43:40.010562       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1217 00:43:40.011793       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1217 00:43:40.013767       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1217 00:43:40.013782       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1217 00:43:40.013878       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1217 00:43:40.013937       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1217 00:43:40.013976       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-414413"
	I1217 00:43:40.014045       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1217 00:43:40.015237       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1217 00:43:40.016425       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1217 00:43:40.016508       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1217 00:43:40.016549       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1217 00:43:40.016555       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1217 00:43:40.016561       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1217 00:43:40.017802       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1217 00:43:40.020319       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1217 00:43:40.020423       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1217 00:43:40.021637       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1217 00:43:40.023825       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1217 00:43:40.031216       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [bbca296c30f3d3f0cca453021716cd6a26728333310fb6dfdeb35c44a6832375] <==
	I1217 00:43:37.652279       1 server_linux.go:53] "Using iptables proxy"
	I1217 00:43:37.717326       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1217 00:43:37.817895       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1217 00:43:37.817936       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1217 00:43:37.818057       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1217 00:43:37.839203       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1217 00:43:37.839278       1 server_linux.go:132] "Using iptables Proxier"
	I1217 00:43:37.845427       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1217 00:43:37.845977       1 server.go:527] "Version info" version="v1.34.2"
	I1217 00:43:37.846049       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 00:43:37.847667       1 config.go:200] "Starting service config controller"
	I1217 00:43:37.847704       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1217 00:43:37.847712       1 config.go:403] "Starting serviceCIDR config controller"
	I1217 00:43:37.847724       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1217 00:43:37.847748       1 config.go:106] "Starting endpoint slice config controller"
	I1217 00:43:37.847766       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1217 00:43:37.847787       1 config.go:309] "Starting node config controller"
	I1217 00:43:37.847797       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1217 00:43:37.847804       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1217 00:43:37.948506       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1217 00:43:37.948538       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1217 00:43:37.948546       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [ba3df04c6b3feaf2f234a1a9b098c1269d844cdbaf6531304d6ddd40b10820d5] <==
	I1217 00:43:35.116203       1 serving.go:386] Generated self-signed cert in-memory
	W1217 00:43:36.609337       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1217 00:43:36.609400       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1217 00:43:36.609413       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1217 00:43:36.609422       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1217 00:43:36.668390       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1217 00:43:36.669160       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 00:43:36.676800       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1217 00:43:36.677107       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1217 00:43:36.677131       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1217 00:43:36.677153       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1217 00:43:36.777514       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 17 00:43:40 default-k8s-diff-port-414413 kubelet[725]: I1217 00:43:40.742380     725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9qln\" (UniqueName: \"kubernetes.io/projected/b3449637-778d-417a-b505-434f3216b394-kube-api-access-v9qln\") pod \"dashboard-metrics-scraper-6ffb444bf9-pwxgr\" (UID: \"b3449637-778d-417a-b505-434f3216b394\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-pwxgr"
	Dec 17 00:43:40 default-k8s-diff-port-414413 kubelet[725]: I1217 00:43:40.742456     725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/bd229193-f29a-44ac-a723-c842b5034e75-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-wnwc6\" (UID: \"bd229193-f29a-44ac-a723-c842b5034e75\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-wnwc6"
	Dec 17 00:43:40 default-k8s-diff-port-414413 kubelet[725]: I1217 00:43:40.742486     725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5f9r\" (UniqueName: \"kubernetes.io/projected/bd229193-f29a-44ac-a723-c842b5034e75-kube-api-access-b5f9r\") pod \"kubernetes-dashboard-855c9754f9-wnwc6\" (UID: \"bd229193-f29a-44ac-a723-c842b5034e75\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-wnwc6"
	Dec 17 00:43:40 default-k8s-diff-port-414413 kubelet[725]: I1217 00:43:40.742507     725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/b3449637-778d-417a-b505-434f3216b394-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-pwxgr\" (UID: \"b3449637-778d-417a-b505-434f3216b394\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-pwxgr"
	Dec 17 00:43:44 default-k8s-diff-port-414413 kubelet[725]: I1217 00:43:44.395694     725 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Dec 17 00:43:46 default-k8s-diff-port-414413 kubelet[725]: I1217 00:43:46.733370     725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-wnwc6" podStartSLOduration=2.255547888 podStartE2EDuration="6.733347641s" podCreationTimestamp="2025-12-17 00:43:40 +0000 UTC" firstStartedPulling="2025-12-17 00:43:40.957576642 +0000 UTC m=+6.846042370" lastFinishedPulling="2025-12-17 00:43:45.435376393 +0000 UTC m=+11.323842123" observedRunningTime="2025-12-17 00:43:46.331555585 +0000 UTC m=+12.220021330" watchObservedRunningTime="2025-12-17 00:43:46.733347641 +0000 UTC m=+12.621813375"
	Dec 17 00:43:49 default-k8s-diff-port-414413 kubelet[725]: I1217 00:43:49.321274     725 scope.go:117] "RemoveContainer" containerID="6a4901560df386b43625704e1859edc7fbe21d9c08ece38745b6655e35020604"
	Dec 17 00:43:50 default-k8s-diff-port-414413 kubelet[725]: I1217 00:43:50.325310     725 scope.go:117] "RemoveContainer" containerID="6a4901560df386b43625704e1859edc7fbe21d9c08ece38745b6655e35020604"
	Dec 17 00:43:50 default-k8s-diff-port-414413 kubelet[725]: I1217 00:43:50.325483     725 scope.go:117] "RemoveContainer" containerID="f4fe7e2efb6c9874e3b2f9dacc235373b8e72b902ea1ddeb7fd1caa95111c574"
	Dec 17 00:43:50 default-k8s-diff-port-414413 kubelet[725]: E1217 00:43:50.325696     725 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-pwxgr_kubernetes-dashboard(b3449637-778d-417a-b505-434f3216b394)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-pwxgr" podUID="b3449637-778d-417a-b505-434f3216b394"
	Dec 17 00:43:51 default-k8s-diff-port-414413 kubelet[725]: I1217 00:43:51.329697     725 scope.go:117] "RemoveContainer" containerID="f4fe7e2efb6c9874e3b2f9dacc235373b8e72b902ea1ddeb7fd1caa95111c574"
	Dec 17 00:43:51 default-k8s-diff-port-414413 kubelet[725]: E1217 00:43:51.329886     725 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-pwxgr_kubernetes-dashboard(b3449637-778d-417a-b505-434f3216b394)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-pwxgr" podUID="b3449637-778d-417a-b505-434f3216b394"
	Dec 17 00:43:57 default-k8s-diff-port-414413 kubelet[725]: I1217 00:43:57.746137     725 scope.go:117] "RemoveContainer" containerID="f4fe7e2efb6c9874e3b2f9dacc235373b8e72b902ea1ddeb7fd1caa95111c574"
	Dec 17 00:43:57 default-k8s-diff-port-414413 kubelet[725]: E1217 00:43:57.746375     725 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-pwxgr_kubernetes-dashboard(b3449637-778d-417a-b505-434f3216b394)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-pwxgr" podUID="b3449637-778d-417a-b505-434f3216b394"
	Dec 17 00:44:08 default-k8s-diff-port-414413 kubelet[725]: I1217 00:44:08.370314     725 scope.go:117] "RemoveContainer" containerID="275d3d03f2346fc781571f2f61dc5d70168875e4ee6e2e5783f3893a19e24e67"
	Dec 17 00:44:12 default-k8s-diff-port-414413 kubelet[725]: I1217 00:44:12.260012     725 scope.go:117] "RemoveContainer" containerID="f4fe7e2efb6c9874e3b2f9dacc235373b8e72b902ea1ddeb7fd1caa95111c574"
	Dec 17 00:44:12 default-k8s-diff-port-414413 kubelet[725]: I1217 00:44:12.385958     725 scope.go:117] "RemoveContainer" containerID="f4fe7e2efb6c9874e3b2f9dacc235373b8e72b902ea1ddeb7fd1caa95111c574"
	Dec 17 00:44:12 default-k8s-diff-port-414413 kubelet[725]: I1217 00:44:12.386484     725 scope.go:117] "RemoveContainer" containerID="d2a2a6abdc96c42c27ab0c3e8b49c402a202b687de42012d4e22faf078a53746"
	Dec 17 00:44:12 default-k8s-diff-port-414413 kubelet[725]: E1217 00:44:12.386690     725 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-pwxgr_kubernetes-dashboard(b3449637-778d-417a-b505-434f3216b394)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-pwxgr" podUID="b3449637-778d-417a-b505-434f3216b394"
	Dec 17 00:44:17 default-k8s-diff-port-414413 kubelet[725]: I1217 00:44:17.746350     725 scope.go:117] "RemoveContainer" containerID="d2a2a6abdc96c42c27ab0c3e8b49c402a202b687de42012d4e22faf078a53746"
	Dec 17 00:44:17 default-k8s-diff-port-414413 kubelet[725]: E1217 00:44:17.746540     725 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-pwxgr_kubernetes-dashboard(b3449637-778d-417a-b505-434f3216b394)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-pwxgr" podUID="b3449637-778d-417a-b505-434f3216b394"
	Dec 17 00:44:28 default-k8s-diff-port-414413 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 17 00:44:28 default-k8s-diff-port-414413 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 17 00:44:28 default-k8s-diff-port-414413 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:44:28 default-k8s-diff-port-414413 systemd[1]: kubelet.service: Consumed 1.733s CPU time.
	
	
	==> kubernetes-dashboard [554f0df62e1c2a39c6dcfbc1c0ee65889b3ab428dc9ed21a3ca89b258910f564] <==
	2025/12/17 00:43:45 Using namespace: kubernetes-dashboard
	2025/12/17 00:43:45 Using in-cluster config to connect to apiserver
	2025/12/17 00:43:45 Using secret token for csrf signing
	2025/12/17 00:43:45 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/17 00:43:45 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/17 00:43:45 Successful initial request to the apiserver, version: v1.34.2
	2025/12/17 00:43:45 Generating JWE encryption key
	2025/12/17 00:43:45 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/17 00:43:45 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/17 00:43:45 Initializing JWE encryption key from synchronized object
	2025/12/17 00:43:45 Creating in-cluster Sidecar client
	2025/12/17 00:43:45 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/17 00:43:45 Serving insecurely on HTTP port: 9090
	2025/12/17 00:44:15 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/17 00:43:45 Starting overwatch
	
	
	==> storage-provisioner [275d3d03f2346fc781571f2f61dc5d70168875e4ee6e2e5783f3893a19e24e67] <==
	I1217 00:43:37.619185       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1217 00:44:07.621558       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [29dcc2e0fca01e5acc47fd9e7b42b73755a799de4a843cc7448f1cf3d24c1370] <==
	I1217 00:44:08.444564       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1217 00:44:08.451081       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1217 00:44:08.451125       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1217 00:44:08.453171       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:44:11.908671       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:44:16.168708       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:44:19.767762       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:44:22.822195       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:44:25.844326       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:44:25.848700       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1217 00:44:25.848883       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1217 00:44:25.849051       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9cb29e54-a67b-4f6f-a2d9-d357efab670a", APIVersion:"v1", ResourceVersion:"630", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-414413_7f1e76f4-69de-441e-a1bb-f1aed0cd50bd became leader
	I1217 00:44:25.849104       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-414413_7f1e76f4-69de-441e-a1bb-f1aed0cd50bd!
	W1217 00:44:25.851016       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:44:25.855207       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1217 00:44:25.949358       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-414413_7f1e76f4-69de-441e-a1bb-f1aed0cd50bd!
	W1217 00:44:27.858898       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:44:27.863123       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:44:29.866914       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:44:29.871332       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:44:31.875257       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:44:31.879747       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-414413 -n default-k8s-diff-port-414413
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-414413 -n default-k8s-diff-port-414413: exit status 2 (355.748282ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context default-k8s-diff-port-414413 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (5.42s)

                                                
                                    

Test pass (355/415)

Order    Passed test    Duration (s)
3 TestDownloadOnly/v1.28.0/json-events 4.18
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.07
9 TestDownloadOnly/v1.28.0/DeleteAll 0.21
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.34.2/json-events 3.52
13 TestDownloadOnly/v1.34.2/preload-exists 0
17 TestDownloadOnly/v1.34.2/LogsDuration 0.07
18 TestDownloadOnly/v1.34.2/DeleteAll 0.21
19 TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds 0.14
21 TestDownloadOnly/v1.35.0-beta.0/json-events 3.12
22 TestDownloadOnly/v1.35.0-beta.0/preload-exists 0
26 TestDownloadOnly/v1.35.0-beta.0/LogsDuration 0.07
27 TestDownloadOnly/v1.35.0-beta.0/DeleteAll 0.21
28 TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds 0.14
29 TestDownloadOnlyKic 0.4
30 TestBinaryMirror 0.8
31 TestOffline 59.26
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
36 TestAddons/Setup 121.53
40 TestAddons/serial/GCPAuth/Namespaces 0.15
41 TestAddons/serial/GCPAuth/FakeCredentials 7.41
57 TestAddons/StoppedEnableDisable 18.51
58 TestCertOptions 24.8
59 TestCertExpiration 210
61 TestForceSystemdFlag 24.43
62 TestForceSystemdEnv 22.92
67 TestErrorSpam/setup 21.67
68 TestErrorSpam/start 0.63
69 TestErrorSpam/status 0.89
70 TestErrorSpam/pause 6.22
71 TestErrorSpam/unpause 5.15
72 TestErrorSpam/stop 8.05
75 TestFunctional/serial/CopySyncFile 0
76 TestFunctional/serial/StartWithProxy 41.44
77 TestFunctional/serial/AuditLog 0
78 TestFunctional/serial/SoftStart 5.94
79 TestFunctional/serial/KubeContext 0.04
80 TestFunctional/serial/KubectlGetPods 0.06
83 TestFunctional/serial/CacheCmd/cache/add_remote 3.22
84 TestFunctional/serial/CacheCmd/cache/add_local 1.27
85 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
86 TestFunctional/serial/CacheCmd/cache/list 0.06
87 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.28
88 TestFunctional/serial/CacheCmd/cache/cache_reload 1.5
89 TestFunctional/serial/CacheCmd/cache/delete 0.12
90 TestFunctional/serial/MinikubeKubectlCmd 0.11
91 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
92 TestFunctional/serial/ExtraConfig 67.29
93 TestFunctional/serial/ComponentHealth 0.07
94 TestFunctional/serial/LogsCmd 1.14
95 TestFunctional/serial/LogsFileCmd 1.17
96 TestFunctional/serial/InvalidService 4.06
98 TestFunctional/parallel/ConfigCmd 0.46
99 TestFunctional/parallel/DashboardCmd 8.89
100 TestFunctional/parallel/DryRun 0.39
101 TestFunctional/parallel/InternationalLanguage 0.16
102 TestFunctional/parallel/StatusCmd 0.93
106 TestFunctional/parallel/ServiceCmdConnect 17.7
107 TestFunctional/parallel/AddonsCmd 0.2
108 TestFunctional/parallel/PersistentVolumeClaim 25.37
110 TestFunctional/parallel/SSHCmd 0.61
111 TestFunctional/parallel/CpCmd 1.88
112 TestFunctional/parallel/MySQL 19.62
113 TestFunctional/parallel/FileSync 0.33
114 TestFunctional/parallel/CertSync 1.76
118 TestFunctional/parallel/NodeLabels 0.08
120 TestFunctional/parallel/NonActiveRuntimeDisabled 0.64
122 TestFunctional/parallel/License 0.46
124 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.43
125 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
127 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 16.21
128 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.05
129 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
133 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
134 TestFunctional/parallel/ServiceCmd/DeployApp 7.13
135 TestFunctional/parallel/ProfileCmd/profile_not_create 0.42
136 TestFunctional/parallel/MountCmd/any-port 5.66
137 TestFunctional/parallel/ProfileCmd/profile_list 0.4
138 TestFunctional/parallel/ProfileCmd/profile_json_output 0.42
139 TestFunctional/parallel/ServiceCmd/List 1.76
140 TestFunctional/parallel/MountCmd/specific-port 1.97
141 TestFunctional/parallel/ServiceCmd/JSONOutput 1.76
142 TestFunctional/parallel/Version/short 0.06
143 TestFunctional/parallel/Version/components 0.52
144 TestFunctional/parallel/MountCmd/VerifyCleanup 1.7
145 TestFunctional/parallel/ImageCommands/ImageListShort 0.25
146 TestFunctional/parallel/ImageCommands/ImageListTable 0.23
147 TestFunctional/parallel/ImageCommands/ImageListJson 0.25
148 TestFunctional/parallel/ImageCommands/ImageListYaml 0.26
149 TestFunctional/parallel/ImageCommands/ImageBuild 2.35
150 TestFunctional/parallel/ImageCommands/Setup 1.17
151 TestFunctional/parallel/ServiceCmd/HTTPS 0.59
152 TestFunctional/parallel/ServiceCmd/Format 0.61
153 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.17
154 TestFunctional/parallel/UpdateContextCmd/no_changes 0.15
155 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.16
156 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.16
157 TestFunctional/parallel/ServiceCmd/URL 0.56
158 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.92
159 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.2
160 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.33
161 TestFunctional/parallel/ImageCommands/ImageRemove 0.47
162 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.55
163 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.37
164 TestFunctional/delete_echo-server_images 0.03
165 TestFunctional/delete_my-image_image 0.02
166 TestFunctional/delete_minikube_cached_images 0.02
170 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile 0
171 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy 65.32
172 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog 0
173 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart 6.09
174 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext 0.04
175 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods 0.06
178 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote 2.59
179 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local 1.21
180 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete 0.06
181 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list 0.06
182 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node 0.29
183 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload 1.46
184 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete 0.12
185 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd 0.12
186 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly 0.12
187 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig 59.59
188 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth 0.06
189 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd 1.17
190 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd 1.21
191 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService 4.91
193 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd 0.46
194 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd 10.34
195 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun 0.77
196 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage 0.23
197 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd 1.2
201 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect 7.78
202 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd 0.17
203 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim 23.82
205 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd 0.67
206 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd 1.87
207 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL 23.24
208 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync 0.27
209 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync 1.86
213 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels 0.07
215 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled 0.6
217 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License 0.47
218 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short 0.07
219 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components 0.49
220 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort 0.24
221 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable 0.25
222 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson 0.26
223 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml 0.26
224 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild 2.29
225 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup 0.43
226 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon 1.38
227 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp 8.14
229 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel 0.44
230 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon 0.88
231 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel 0
233 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup 11.22
234 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon 1.22
235 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile 0.33
236 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove 0.51
237 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile 0.61
238 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon 0.94
239 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List 0.5
240 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput 0.51
241 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS 0.34
242 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format 0.34
243 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL 0.35
244 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/IngressIP 0.08
245 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect 0
249 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel 0.11
250 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create 0.56
251 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port 12.28
252 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list 0.5
253 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output 0.53
254 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes 0.18
255 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster 0.19
256 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters 0.16
257 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port 1.75
258 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup 1.87
259 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images 0.03
260 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image 0.02
261 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images 0.02
265 TestMultiControlPlane/serial/StartCluster 115.82
266 TestMultiControlPlane/serial/DeployApp 4.03
267 TestMultiControlPlane/serial/PingHostFromPods 1.11
268 TestMultiControlPlane/serial/AddWorkerNode 23.34
269 TestMultiControlPlane/serial/NodeLabels 0.06
270 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.85
271 TestMultiControlPlane/serial/CopyFile 16.97
272 TestMultiControlPlane/serial/StopSecondaryNode 14.22
273 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.69
274 TestMultiControlPlane/serial/RestartSecondaryNode 8.61
275 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.86
276 TestMultiControlPlane/serial/RestartClusterKeepsNodes 103.23
277 TestMultiControlPlane/serial/DeleteSecondaryNode 10.46
278 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.66
279 TestMultiControlPlane/serial/StopCluster 48.97
280 TestMultiControlPlane/serial/RestartCluster 58.28
281 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.65
282 TestMultiControlPlane/serial/AddSecondaryNode 38.81
283 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.85
288 TestJSONOutput/start/Command 38.35
289 TestJSONOutput/start/Audit 0
291 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
292 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
295 TestJSONOutput/pause/Audit 0
297 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
298 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
301 TestJSONOutput/unpause/Audit 0
303 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
304 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
306 TestJSONOutput/stop/Command 6.07
307 TestJSONOutput/stop/Audit 0
309 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
310 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
311 TestErrorJSONOutput 0.23
313 TestKicCustomNetwork/create_custom_network 27.4
314 TestKicCustomNetwork/use_default_bridge_network 20.72
315 TestKicExistingNetwork 22.45
316 TestKicCustomSubnet 25.93
317 TestKicStaticIP 22.59
318 TestMainNoArgs 0.06
319 TestMinikubeProfile 44.85
322 TestMountStart/serial/StartWithMountFirst 7.74
323 TestMountStart/serial/VerifyMountFirst 0.26
324 TestMountStart/serial/StartWithMountSecond 4.56
325 TestMountStart/serial/VerifyMountSecond 0.27
326 TestMountStart/serial/DeleteFirst 1.66
327 TestMountStart/serial/VerifyMountPostDelete 0.26
328 TestMountStart/serial/Stop 1.25
329 TestMountStart/serial/RestartStopped 7.11
330 TestMountStart/serial/VerifyMountPostStop 0.27
333 TestMultiNode/serial/FreshStart2Nodes 58.38
334 TestMultiNode/serial/DeployApp2Nodes 3.45
335 TestMultiNode/serial/PingHostFrom2Pods 0.74
336 TestMultiNode/serial/AddNode 55.31
337 TestMultiNode/serial/MultiNodeLabels 0.06
338 TestMultiNode/serial/ProfileList 0.63
339 TestMultiNode/serial/CopyFile 9.66
340 TestMultiNode/serial/StopNode 2.21
341 TestMultiNode/serial/StartAfterStop 6.98
342 TestMultiNode/serial/RestartKeepsNodes 81.81
343 TestMultiNode/serial/DeleteNode 5.06
344 TestMultiNode/serial/StopMultiNode 30.27
345 TestMultiNode/serial/RestartMultiNode 24.19
346 TestMultiNode/serial/ValidateNameConflict 25.95
351 TestPreload 78.93
353 TestScheduledStopUnix 95.23
356 TestInsufficientStorage 8.73
357 TestRunningBinaryUpgrade 50.04
359 TestKubernetesUpgrade 298.55
360 TestMissingContainerUpgrade 99.3
362 TestPause/serial/Start 60.51
363 TestStoppedBinaryUpgrade/Setup 0.62
364 TestStoppedBinaryUpgrade/Upgrade 302.71
365 TestPause/serial/SecondStartNoReconfiguration 8.27
375 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
376 TestNoKubernetes/serial/StartWithK8s 23.99
384 TestNetworkPlugins/group/false 3.41
388 TestNoKubernetes/serial/StartWithStopK8s 15.91
389 TestNoKubernetes/serial/Start 6.76
390 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
391 TestNoKubernetes/serial/VerifyK8sNotRunning 0.27
392 TestNoKubernetes/serial/ProfileList 14.73
393 TestNoKubernetes/serial/Stop 1.27
394 TestNoKubernetes/serial/StartNoArgs 7.1
395 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.26
397 TestStartStop/group/old-k8s-version/serial/FirstStart 49.8
398 TestStartStop/group/old-k8s-version/serial/DeployApp 7.29
400 TestStartStop/group/old-k8s-version/serial/Stop 16.96
401 TestStoppedBinaryUpgrade/MinikubeLogs 1.03
403 TestStartStop/group/no-preload/serial/FirstStart 45.1
404 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.23
405 TestStartStop/group/old-k8s-version/serial/SecondStart 43.47
407 TestStartStop/group/embed-certs/serial/FirstStart 41.67
408 TestStartStop/group/no-preload/serial/DeployApp 7.34
410 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 42.42
412 TestStartStop/group/no-preload/serial/Stop 17.75
413 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
414 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
415 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.28
417 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.23
418 TestStartStop/group/no-preload/serial/SecondStart 52.02
420 TestStartStop/group/newest-cni/serial/FirstStart 23.5
421 TestStartStop/group/embed-certs/serial/DeployApp 7.29
423 TestStartStop/group/embed-certs/serial/Stop 18.23
424 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.27
425 TestStartStop/group/newest-cni/serial/DeployApp 0
428 TestStartStop/group/newest-cni/serial/Stop 2.76
429 TestStartStop/group/default-k8s-diff-port/serial/Stop 18.17
430 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
431 TestStartStop/group/newest-cni/serial/SecondStart 10.09
432 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.21
433 TestStartStop/group/embed-certs/serial/SecondStart 47.69
434 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
435 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
436 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.25
438 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.21
439 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 51.97
440 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
441 TestNetworkPlugins/group/auto/Start 71.61
442 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.12
443 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.25
445 TestNetworkPlugins/group/kindnet/Start 37.75
446 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
447 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.07
448 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.25
450 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
451 TestNetworkPlugins/group/calico/Start 49.18
452 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
453 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
454 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.24
456 TestNetworkPlugins/group/kindnet/KubeletFlags 0.32
457 TestNetworkPlugins/group/kindnet/NetCatPod 10.2
458 TestNetworkPlugins/group/custom-flannel/Start 56.57
459 TestNetworkPlugins/group/auto/KubeletFlags 0.34
460 TestNetworkPlugins/group/auto/NetCatPod 10.2
461 TestNetworkPlugins/group/kindnet/DNS 0.13
462 TestNetworkPlugins/group/kindnet/Localhost 0.1
463 TestNetworkPlugins/group/kindnet/HairPin 0.09
464 TestNetworkPlugins/group/auto/DNS 0.16
465 TestNetworkPlugins/group/auto/Localhost 0.11
466 TestNetworkPlugins/group/auto/HairPin 0.13
467 TestNetworkPlugins/group/enable-default-cni/Start 67.97
468 TestNetworkPlugins/group/calico/ControllerPod 6.01
469 TestNetworkPlugins/group/flannel/Start 51.55
470 TestNetworkPlugins/group/calico/KubeletFlags 0.41
471 TestNetworkPlugins/group/calico/NetCatPod 9.67
472 TestNetworkPlugins/group/calico/DNS 0.12
473 TestNetworkPlugins/group/calico/Localhost 0.11
474 TestNetworkPlugins/group/calico/HairPin 0.09
475 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.32
476 TestNetworkPlugins/group/custom-flannel/NetCatPod 8.23
477 TestNetworkPlugins/group/custom-flannel/DNS 0.13
478 TestNetworkPlugins/group/custom-flannel/Localhost 0.12
479 TestNetworkPlugins/group/custom-flannel/HairPin 0.11
480 TestNetworkPlugins/group/bridge/Start 63.2
481 TestNetworkPlugins/group/flannel/ControllerPod 6
482 TestNetworkPlugins/group/flannel/KubeletFlags 0.28
483 TestNetworkPlugins/group/flannel/NetCatPod 9.16
484 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.31
485 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.27
486 TestNetworkPlugins/group/flannel/DNS 0.12
487 TestNetworkPlugins/group/flannel/Localhost 0.08
488 TestNetworkPlugins/group/flannel/HairPin 0.08
489 TestNetworkPlugins/group/enable-default-cni/DNS 0.1
490 TestNetworkPlugins/group/enable-default-cni/Localhost 0.08
491 TestNetworkPlugins/group/enable-default-cni/HairPin 0.08
492 TestNetworkPlugins/group/bridge/KubeletFlags 0.27
493 TestNetworkPlugins/group/bridge/NetCatPod 8.17
494 TestNetworkPlugins/group/bridge/DNS 0.1
495 TestNetworkPlugins/group/bridge/Localhost 0.08
496 TestNetworkPlugins/group/bridge/HairPin 0.08
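Note: the third column in the list above is each test's wall-clock duration in seconds. If the list is saved to a plain-text file, the slowest passing tests can be surfaced by sorting on that column; the file name below is only a placeholder:

	sort -k3,3 -nr pass-durations.txt | head -n 10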
TestDownloadOnly/v1.28.0/json-events (4.18s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-717461 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-717461 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (4.175700623s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (4.18s)
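Note: to rerun this step locally one can either invoke the captured out/minikube-linux-amd64 start command directly, or drive the same subtest through go test from a minikube checkout. The package path and the -minikube-start-args flag in the sketch below are assumptions based on the usual layout of minikube's integration suite, not something this report states:

	go test -v -run 'TestDownloadOnly/v1.28.0/json-events' ./test/integration \
	  -args --minikube-start-args="--driver=docker --container-runtime=crio"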

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1217 00:04:34.821926   16354 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1217 00:04:34.822014   16354 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22168-12816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
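Note: the preload-exists check reduces to the cached tarball logged above being present on disk. With MINIKUBE_HOME set as in this run (/home/jenkins/minikube-integration/22168-12816/.minikube), a roughly equivalent manual check is:

	ls -lh "$MINIKUBE_HOME/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4"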

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-717461
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-717461: exit status 85 (67.353143ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-717461 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-717461 │ jenkins │ v1.37.0 │ 17 Dec 25 00:04 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 00:04:30
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 00:04:30.698816   16366 out.go:360] Setting OutFile to fd 1 ...
	I1217 00:04:30.698920   16366 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:04:30.698930   16366 out.go:374] Setting ErrFile to fd 2...
	I1217 00:04:30.698934   16366 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:04:30.699146   16366 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-12816/.minikube/bin
	W1217 00:04:30.699264   16366 root.go:314] Error reading config file at /home/jenkins/minikube-integration/22168-12816/.minikube/config/config.json: open /home/jenkins/minikube-integration/22168-12816/.minikube/config/config.json: no such file or directory
	I1217 00:04:30.699755   16366 out.go:368] Setting JSON to true
	I1217 00:04:30.700650   16366 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2821,"bootTime":1765927050,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 00:04:30.700704   16366 start.go:143] virtualization: kvm guest
	I1217 00:04:30.705408   16366 out.go:99] [download-only-717461] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W1217 00:04:30.705544   16366 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/22168-12816/.minikube/cache/preloaded-tarball: no such file or directory
	I1217 00:04:30.705588   16366 notify.go:221] Checking for updates...
	I1217 00:04:30.706660   16366 out.go:171] MINIKUBE_LOCATION=22168
	I1217 00:04:30.708373   16366 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 00:04:30.709524   16366 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22168-12816/kubeconfig
	I1217 00:04:30.710641   16366 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-12816/.minikube
	I1217 00:04:30.711774   16366 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1217 00:04:30.714094   16366 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1217 00:04:30.714356   16366 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 00:04:30.738368   16366 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1217 00:04:30.738471   16366 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:04:30.959550   16366 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:63 SystemTime:2025-12-17 00:04:30.950500712 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 00:04:30.959643   16366 docker.go:319] overlay module found
	I1217 00:04:30.961146   16366 out.go:99] Using the docker driver based on user configuration
	I1217 00:04:30.961167   16366 start.go:309] selected driver: docker
	I1217 00:04:30.961172   16366 start.go:927] validating driver "docker" against <nil>
	I1217 00:04:30.961265   16366 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:04:31.018461   16366 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:63 SystemTime:2025-12-17 00:04:31.007972687 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 00:04:31.018620   16366 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1217 00:04:31.019154   16366 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1217 00:04:31.019334   16366 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1217 00:04:31.021015   16366 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-717461 host does not exist
	  To start a cluster, run: "minikube start -p download-only-717461"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.07s)
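Note: the exit status 85 above is treated as expected by the test (it still passes): the profile was created with --download-only, so there is no control-plane host for minikube logs to read, and the command prints the "host does not exist" hint instead of a log dump. To see the same exit code locally (not part of the test):

	out/minikube-linux-amd64 logs -p download-only-717461; echo "exit status: $?"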

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-717461
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.34.2/json-events (3.52s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-618348 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-618348 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio: (3.521995595s)
--- PASS: TestDownloadOnly/v1.34.2/json-events (3.52s)

                                                
                                    
TestDownloadOnly/v1.34.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/preload-exists
I1217 00:04:38.760037   16354 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
I1217 00:04:38.760077   16354 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22168-12816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.2/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.2/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-618348
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-618348: exit status 85 (67.572828ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-717461 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-717461 │ jenkins │ v1.37.0 │ 17 Dec 25 00:04 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 17 Dec 25 00:04 UTC │ 17 Dec 25 00:04 UTC │
	│ delete  │ -p download-only-717461                                                                                                                                                   │ download-only-717461 │ jenkins │ v1.37.0 │ 17 Dec 25 00:04 UTC │ 17 Dec 25 00:04 UTC │
	│ start   │ -o=json --download-only -p download-only-618348 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-618348 │ jenkins │ v1.37.0 │ 17 Dec 25 00:04 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 00:04:35
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 00:04:35.288918   16725 out.go:360] Setting OutFile to fd 1 ...
	I1217 00:04:35.289153   16725 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:04:35.289162   16725 out.go:374] Setting ErrFile to fd 2...
	I1217 00:04:35.289166   16725 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:04:35.289361   16725 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-12816/.minikube/bin
	I1217 00:04:35.289785   16725 out.go:368] Setting JSON to true
	I1217 00:04:35.290565   16725 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2825,"bootTime":1765927050,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 00:04:35.290614   16725 start.go:143] virtualization: kvm guest
	I1217 00:04:35.292360   16725 out.go:99] [download-only-618348] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 00:04:35.292520   16725 notify.go:221] Checking for updates...
	I1217 00:04:35.293894   16725 out.go:171] MINIKUBE_LOCATION=22168
	I1217 00:04:35.295264   16725 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 00:04:35.296555   16725 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22168-12816/kubeconfig
	I1217 00:04:35.297882   16725 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-12816/.minikube
	I1217 00:04:35.299098   16725 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1217 00:04:35.301145   16725 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1217 00:04:35.301343   16725 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 00:04:35.323893   16725 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1217 00:04:35.324012   16725 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:04:35.378861   16725 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:51 SystemTime:2025-12-17 00:04:35.369201696 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 00:04:35.379030   16725 docker.go:319] overlay module found
	I1217 00:04:35.380580   16725 out.go:99] Using the docker driver based on user configuration
	I1217 00:04:35.380600   16725 start.go:309] selected driver: docker
	I1217 00:04:35.380605   16725 start.go:927] validating driver "docker" against <nil>
	I1217 00:04:35.380697   16725 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:04:35.436493   16725 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:51 SystemTime:2025-12-17 00:04:35.427611768 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 00:04:35.436626   16725 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1217 00:04:35.437136   16725 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1217 00:04:35.437268   16725 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1217 00:04:35.439023   16725 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-618348 host does not exist
	  To start a cluster, run: "minikube start -p download-only-618348"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.2/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.34.2/DeleteAll (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.2/DeleteAll (0.21s)

                                                
                                    
TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-618348
--- PASS: TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/json-events (3.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-928929 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-928929 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (3.117701807s)
--- PASS: TestDownloadOnly/v1.35.0-beta.0/json-events (3.12s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/preload-exists
I1217 00:04:42.290069   16354 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
I1217 00:04:42.290111   16354 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22168-12816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.35.0-beta.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-928929
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-928929: exit status 85 (68.029618ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                       ARGS                                                                                       │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-717461 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio        │ download-only-717461 │ jenkins │ v1.37.0 │ 17 Dec 25 00:04 UTC │                     │
	│ delete  │ --all                                                                                                                                                                            │ minikube             │ jenkins │ v1.37.0 │ 17 Dec 25 00:04 UTC │ 17 Dec 25 00:04 UTC │
	│ delete  │ -p download-only-717461                                                                                                                                                          │ download-only-717461 │ jenkins │ v1.37.0 │ 17 Dec 25 00:04 UTC │ 17 Dec 25 00:04 UTC │
	│ start   │ -o=json --download-only -p download-only-618348 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio        │ download-only-618348 │ jenkins │ v1.37.0 │ 17 Dec 25 00:04 UTC │                     │
	│ delete  │ --all                                                                                                                                                                            │ minikube             │ jenkins │ v1.37.0 │ 17 Dec 25 00:04 UTC │ 17 Dec 25 00:04 UTC │
	│ delete  │ -p download-only-618348                                                                                                                                                          │ download-only-618348 │ jenkins │ v1.37.0 │ 17 Dec 25 00:04 UTC │ 17 Dec 25 00:04 UTC │
	│ start   │ -o=json --download-only -p download-only-928929 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-928929 │ jenkins │ v1.37.0 │ 17 Dec 25 00:04 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 00:04:39
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 00:04:39.223198   17062 out.go:360] Setting OutFile to fd 1 ...
	I1217 00:04:39.223277   17062 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:04:39.223281   17062 out.go:374] Setting ErrFile to fd 2...
	I1217 00:04:39.223286   17062 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:04:39.223476   17062 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-12816/.minikube/bin
	I1217 00:04:39.223886   17062 out.go:368] Setting JSON to true
	I1217 00:04:39.224716   17062 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2829,"bootTime":1765927050,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 00:04:39.224761   17062 start.go:143] virtualization: kvm guest
	I1217 00:04:39.226537   17062 out.go:99] [download-only-928929] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 00:04:39.226643   17062 notify.go:221] Checking for updates...
	I1217 00:04:39.227974   17062 out.go:171] MINIKUBE_LOCATION=22168
	I1217 00:04:39.229247   17062 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 00:04:39.230403   17062 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22168-12816/kubeconfig
	I1217 00:04:39.231488   17062 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-12816/.minikube
	I1217 00:04:39.232663   17062 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1217 00:04:39.234710   17062 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1217 00:04:39.234885   17062 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 00:04:39.257715   17062 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1217 00:04:39.257823   17062 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:04:39.309688   17062 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:50 SystemTime:2025-12-17 00:04:39.30066471 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 00:04:39.309773   17062 docker.go:319] overlay module found
	I1217 00:04:39.311590   17062 out.go:99] Using the docker driver based on user configuration
	I1217 00:04:39.311622   17062 start.go:309] selected driver: docker
	I1217 00:04:39.311628   17062 start.go:927] validating driver "docker" against <nil>
	I1217 00:04:39.311705   17062 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:04:39.364649   17062 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:50 SystemTime:2025-12-17 00:04:39.355721075 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 00:04:39.364792   17062 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1217 00:04:39.365341   17062 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1217 00:04:39.365486   17062 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1217 00:04:39.366978   17062 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-928929 host does not exist
	  To start a cluster, run: "minikube start -p download-only-928929"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.21s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-928929
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestDownloadOnlyKic (0.4s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-275762 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:176: Cleaning up "download-docker-275762" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-275762
--- PASS: TestDownloadOnlyKic (0.40s)

                                                
                                    
TestBinaryMirror (0.8s)

                                                
                                                
=== RUN   TestBinaryMirror
I1217 00:04:43.527121   16354 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-602188 --alsologtostderr --binary-mirror http://127.0.0.1:39411 --driver=docker  --container-runtime=crio
helpers_test.go:176: Cleaning up "binary-mirror-602188" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-602188
--- PASS: TestBinaryMirror (0.80s)
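Note: TestBinaryMirror points the Kubernetes binary downloads at a local HTTP endpoint via --binary-mirror instead of the dl.k8s.io URL logged above. A rough local equivalent is sketched below; the mirror directory and profile name are placeholders, and the served tree would need to replicate the release/<version>/bin/linux/amd64/ layout that the default URL uses:

	python3 -m http.server 39411 --directory /srv/k8s-mirror &
	out/minikube-linux-amd64 start --download-only -p binary-mirror-local \
	  --binary-mirror http://127.0.0.1:39411 --driver=docker --container-runtime=crio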

                                                
                                    
TestOffline (59.26s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-981697 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-981697 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (56.891427391s)
helpers_test.go:176: Cleaning up "offline-crio-981697" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-981697
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-981697: (2.369033484s)
--- PASS: TestOffline (59.26s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1002: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-401977
addons_test.go:1002: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-401977: exit status 85 (67.741519ms)

                                                
                                                
-- stdout --
	* Profile "addons-401977" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-401977"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1013: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-401977
addons_test.go:1013: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-401977: exit status 85 (67.796839ms)

                                                
                                                
-- stdout --
	* Profile "addons-401977" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-401977"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/Setup (121.53s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-401977 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-401977 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m1.530529997s)
--- PASS: TestAddons/Setup (121.53s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.15s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:632: (dbg) Run:  kubectl --context addons-401977 create ns new-namespace
addons_test.go:646: (dbg) Run:  kubectl --context addons-401977 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.15s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (7.41s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:677: (dbg) Run:  kubectl --context addons-401977 create -f testdata/busybox.yaml
addons_test.go:684: (dbg) Run:  kubectl --context addons-401977 create sa gcp-auth-test
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [0d3a07bd-259f-4827-8d61-b6a0453c30dc] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [0d3a07bd-259f-4827-8d61-b6a0453c30dc] Running
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 7.003317699s
addons_test.go:696: (dbg) Run:  kubectl --context addons-401977 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:708: (dbg) Run:  kubectl --context addons-401977 describe sa gcp-auth-test
addons_test.go:746: (dbg) Run:  kubectl --context addons-401977 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (7.41s)

                                                
                                    
TestAddons/StoppedEnableDisable (18.51s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-401977
addons_test.go:174: (dbg) Done: out/minikube-linux-amd64 stop -p addons-401977: (18.240459589s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-401977
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-401977
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-401977
--- PASS: TestAddons/StoppedEnableDisable (18.51s)

                                                
                                    
TestCertOptions (24.8s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-636512 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-636512 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (21.756676935s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-636512 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-636512 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-636512 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:176: Cleaning up "cert-options-636512" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-636512
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-636512: (2.40640302s)
--- PASS: TestCertOptions (24.80s)
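Note: the ssh/openssl step above is where the test inspects the API server certificate after starting with extra --apiserver-ips and --apiserver-names values. Narrowing the decoded certificate down to its SAN block shows the same thing at a glance; the grep filter is an illustration added here, not part of the test:

	out/minikube-linux-amd64 -p cert-options-636512 ssh \
	  "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
	  | grep -A1 "Subject Alternative Name"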

                                                
                                    
TestCertExpiration (210s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-753607 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-753607 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (20.918806638s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-753607 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-753607 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (5.518031347s)
helpers_test.go:176: Cleaning up "cert-expiration-753607" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-753607
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-753607: (3.558666106s)
--- PASS: TestCertExpiration (210.00s)

                                                
                                    
TestForceSystemdFlag (24.43s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-452634 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-452634 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (21.693954402s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-452634 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:176: Cleaning up "force-systemd-flag-452634" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-452634
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-452634: (2.450148617s)
--- PASS: TestForceSystemdFlag (24.43s)

                                                
                                    
TestForceSystemdEnv (22.92s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-958106 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-958106 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (20.308242251s)
helpers_test.go:176: Cleaning up "force-systemd-env-958106" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-958106
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-958106: (2.609923851s)
--- PASS: TestForceSystemdEnv (22.92s)

                                                
                                    
TestErrorSpam/setup (21.67s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-546151 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-546151 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-546151 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-546151 --driver=docker  --container-runtime=crio: (21.670414022s)
--- PASS: TestErrorSpam/setup (21.67s)

                                                
                                    
TestErrorSpam/start (0.63s)
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-546151 --log_dir /tmp/nospam-546151 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-546151 --log_dir /tmp/nospam-546151 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-546151 --log_dir /tmp/nospam-546151 start --dry-run
--- PASS: TestErrorSpam/start (0.63s)

                                                
                                    
TestErrorSpam/status (0.89s)
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-546151 --log_dir /tmp/nospam-546151 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-546151 --log_dir /tmp/nospam-546151 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-546151 --log_dir /tmp/nospam-546151 status
--- PASS: TestErrorSpam/status (0.89s)

                                                
                                    
TestErrorSpam/pause (6.22s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-546151 --log_dir /tmp/nospam-546151 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-546151 --log_dir /tmp/nospam-546151 pause: exit status 80 (2.233629377s)

                                                
                                                
-- stdout --
	* Pausing node nospam-546151 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T00:10:12Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-546151 --log_dir /tmp/nospam-546151 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-546151 --log_dir /tmp/nospam-546151 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-546151 --log_dir /tmp/nospam-546151 pause: exit status 80 (1.99365954s)

                                                
                                                
-- stdout --
	* Pausing node nospam-546151 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T00:10:14Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-546151 --log_dir /tmp/nospam-546151 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-546151 --log_dir /tmp/nospam-546151 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-546151 --log_dir /tmp/nospam-546151 pause: exit status 80 (1.990083934s)

                                                
                                                
-- stdout --
	* Pausing node nospam-546151 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T00:10:16Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-546151 --log_dir /tmp/nospam-546151 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (6.22s)

                                                
                                    
TestErrorSpam/unpause (5.15s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-546151 --log_dir /tmp/nospam-546151 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-546151 --log_dir /tmp/nospam-546151 unpause: exit status 80 (1.805890448s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-546151 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T00:10:18Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-546151 --log_dir /tmp/nospam-546151 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-546151 --log_dir /tmp/nospam-546151 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-546151 --log_dir /tmp/nospam-546151 unpause: exit status 80 (1.809217533s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-546151 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T00:10:20Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-546151 --log_dir /tmp/nospam-546151 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-546151 --log_dir /tmp/nospam-546151 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-546151 --log_dir /tmp/nospam-546151 unpause: exit status 80 (1.532781015s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-546151 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T00:10:22Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-546151 --log_dir /tmp/nospam-546151 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (5.15s)

                                                
                                    
TestErrorSpam/stop (8.05s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-546151 --log_dir /tmp/nospam-546151 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-546151 --log_dir /tmp/nospam-546151 stop: (7.859373702s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-546151 --log_dir /tmp/nospam-546151 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-546151 --log_dir /tmp/nospam-546151 stop
--- PASS: TestErrorSpam/stop (8.05s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22168-12816/.minikube/files/etc/test/nested/copy/16354/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (41.44s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-396394 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-396394 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (41.443150545s)
--- PASS: TestFunctional/serial/StartWithProxy (41.44s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (5.94s)
=== RUN   TestFunctional/serial/SoftStart
I1217 00:11:16.749526   16354 config.go:182] Loaded profile config "functional-396394": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-396394 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-396394 --alsologtostderr -v=8: (5.941032966s)
functional_test.go:678: soft start took 5.942882707s for "functional-396394" cluster.
I1217 00:11:22.691032   16354 config.go:182] Loaded profile config "functional-396394": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/SoftStart (5.94s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.06s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-396394 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.22s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-396394 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-396394 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-396394 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-396394 cache add registry.k8s.io/pause:latest: (1.484567017s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.22s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.27s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-396394 /tmp/TestFunctionalserialCacheCmdcacheadd_local3974117345/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-396394 cache add minikube-local-cache-test:functional-396394
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-396394 cache delete minikube-local-cache-test:functional-396394
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-396394
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.27s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-396394 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.5s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-396394 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-396394 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-396394 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (269.581535ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-396394 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-396394 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.50s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.12s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.11s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-396394 kubectl -- --context functional-396394 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-396394 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                    
TestFunctional/serial/ExtraConfig (67.29s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-396394 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1217 00:11:46.783486   16354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/addons-401977/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:11:46.789891   16354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/addons-401977/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:11:46.801239   16354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/addons-401977/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:11:46.822598   16354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/addons-401977/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:11:46.863931   16354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/addons-401977/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:11:46.945352   16354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/addons-401977/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:11:47.106872   16354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/addons-401977/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:11:47.428529   16354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/addons-401977/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:11:48.070518   16354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/addons-401977/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:11:49.351836   16354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/addons-401977/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:11:51.914675   16354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/addons-401977/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:11:57.036196   16354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/addons-401977/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:12:07.277827   16354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/addons-401977/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:12:27.759210   16354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/addons-401977/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-396394 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (1m7.289219572s)
functional_test.go:776: restart took 1m7.289342041s for "functional-396394" cluster.
I1217 00:12:36.814539   16354 config.go:182] Loaded profile config "functional-396394": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/ExtraConfig (67.29s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-396394 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.14s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-396394 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-396394 logs: (1.141072576s)
--- PASS: TestFunctional/serial/LogsCmd (1.14s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.17s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-396394 logs --file /tmp/TestFunctionalserialLogsFileCmd2323235782/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-396394 logs --file /tmp/TestFunctionalserialLogsFileCmd2323235782/001/logs.txt: (1.172151389s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.17s)

                                                
                                    
TestFunctional/serial/InvalidService (4.06s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-396394 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-396394
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-396394: exit status 115 (327.569984ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:30128 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-396394 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.06s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.46s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-396394 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-396394 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-396394 config get cpus: exit status 14 (89.363691ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-396394 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-396394 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-396394 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-396394 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-396394 config get cpus: exit status 14 (78.346467ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.46s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (8.89s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-396394 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-396394 --alsologtostderr -v=1] ...
helpers_test.go:526: unable to kill pid 52638: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (8.89s)

                                                
                                    
TestFunctional/parallel/DryRun (0.39s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-396394 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-396394 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (153.583239ms)

                                                
                                                
-- stdout --
	* [functional-396394] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22168
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22168-12816/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-12816/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 00:13:05.247420   52188 out.go:360] Setting OutFile to fd 1 ...
	I1217 00:13:05.247624   52188 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:13:05.247632   52188 out.go:374] Setting ErrFile to fd 2...
	I1217 00:13:05.247636   52188 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:13:05.247817   52188 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-12816/.minikube/bin
	I1217 00:13:05.248210   52188 out.go:368] Setting JSON to false
	I1217 00:13:05.249121   52188 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3335,"bootTime":1765927050,"procs":249,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 00:13:05.249179   52188 start.go:143] virtualization: kvm guest
	I1217 00:13:05.250836   52188 out.go:179] * [functional-396394] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 00:13:05.252277   52188 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 00:13:05.252332   52188 notify.go:221] Checking for updates...
	I1217 00:13:05.254710   52188 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 00:13:05.255871   52188 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22168-12816/kubeconfig
	I1217 00:13:05.257065   52188 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-12816/.minikube
	I1217 00:13:05.258270   52188 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 00:13:05.259399   52188 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 00:13:05.260978   52188 config.go:182] Loaded profile config "functional-396394": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:13:05.261656   52188 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 00:13:05.284324   52188 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1217 00:13:05.284404   52188 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:13:05.338789   52188 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-17 00:13:05.329805817 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 00:13:05.338939   52188 docker.go:319] overlay module found
	I1217 00:13:05.340651   52188 out.go:179] * Using the docker driver based on existing profile
	I1217 00:13:05.341934   52188 start.go:309] selected driver: docker
	I1217 00:13:05.341946   52188 start.go:927] validating driver "docker" against &{Name:functional-396394 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-396394 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:13:05.342053   52188 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 00:13:05.343656   52188 out.go:203] 
	W1217 00:13:05.344911   52188 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1217 00:13:05.345960   52188 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-396394 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.39s)

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.16s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-396394 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-396394 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (156.018707ms)

                                                
                                                
-- stdout --
	* [functional-396394] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22168
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22168-12816/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-12816/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 00:13:05.093225   52102 out.go:360] Setting OutFile to fd 1 ...
	I1217 00:13:05.093505   52102 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:13:05.093516   52102 out.go:374] Setting ErrFile to fd 2...
	I1217 00:13:05.093519   52102 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:13:05.093800   52102 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-12816/.minikube/bin
	I1217 00:13:05.094224   52102 out.go:368] Setting JSON to false
	I1217 00:13:05.095187   52102 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3335,"bootTime":1765927050,"procs":249,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 00:13:05.095238   52102 start.go:143] virtualization: kvm guest
	I1217 00:13:05.097887   52102 out.go:179] * [functional-396394] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1217 00:13:05.099278   52102 notify.go:221] Checking for updates...
	I1217 00:13:05.099337   52102 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 00:13:05.100602   52102 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 00:13:05.101916   52102 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22168-12816/kubeconfig
	I1217 00:13:05.103233   52102 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-12816/.minikube
	I1217 00:13:05.104406   52102 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 00:13:05.105569   52102 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 00:13:05.107075   52102 config.go:182] Loaded profile config "functional-396394": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:13:05.107576   52102 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 00:13:05.130960   52102 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1217 00:13:05.131052   52102 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:13:05.185094   52102 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-17 00:13:05.175087706 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 00:13:05.185191   52102 docker.go:319] overlay module found
	I1217 00:13:05.186859   52102 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1217 00:13:05.188072   52102 start.go:309] selected driver: docker
	I1217 00:13:05.188088   52102 start.go:927] validating driver "docker" against &{Name:functional-396394 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-396394 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:13:05.188165   52102 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 00:13:05.189753   52102 out.go:203] 
	W1217 00:13:05.190922   52102 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1217 00:13:05.192047   52102 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.16s)
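The French stderr above is expected: the test forces a French locale and requests far too little memory, so minikube exits with RSRC_INSUFFICIENT_REQ_MEMORY (the message says the requested 250 MiB allocation is below the usable minimum of 1800 MB) in the selected language. A rough manual reproduction, assuming the locale is taken from LC_ALL and that --dry-run is acceptable here (neither detail is shown in this log), might be:

	# Force French output and an undersized memory request; expect a localized RSRC_INSUFFICIENT_REQ_MEMORY error.
	LC_ALL=fr LANG=fr out/minikube-linux-amd64 start -p functional-396394 --dry-run --memory=250mb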

                                                
                                    

TestFunctional/parallel/StatusCmd (0.93s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-396394 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-396394 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-396394 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.93s)
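For reference, the same status fields exercised above can also be pulled from the JSON form for scripting; the jq filter below is illustrative and not part of the test:

	# Single-field checks against the profile's status.
	out/minikube-linux-amd64 -p functional-396394 status -o json | jq -r '.Host, .Kubelet, .APIServer'
	out/minikube-linux-amd64 -p functional-396394 status -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}}'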

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (17.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-396394 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-396394 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-7d85dfc575-gzjjv" [3b78ebe1-a527-4073-8daa-bf660c4b96be] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-connect-7d85dfc575-gzjjv" [3b78ebe1-a527-4073-8daa-bf660c4b96be] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 17.003469185s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-396394 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.49.2:32413
functional_test.go:1680: http://192.168.49.2:32413: success! body:
Request served by hello-node-connect-7d85dfc575-gzjjv

                                                
                                                
HTTP/1.1 GET /

                                                
                                                
Host: 192.168.49.2:32413
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (17.70s)
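The steps above amount to a standard NodePort round-trip; condensed by hand (the first three commands are taken from the log, the final curl is an addition):

	kubectl --context functional-396394 create deployment hello-node-connect --image kicbase/echo-server
	kubectl --context functional-396394 expose deployment hello-node-connect --type=NodePort --port=8080
	# Resolve the node URL and hit the echo server; the response body names the serving pod.
	curl "$(out/minikube-linux-amd64 -p functional-396394 service hello-node-connect --url)"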

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-396394 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-396394 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.20s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (25.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [817757cc-54be-4abb-a4c7-906c3e51d5ba] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003192322s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-396394 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-396394 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-396394 get pvc myclaim -o=json
I1217 00:12:50.574826   16354 retry.go:31] will retry after 1.909863682s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:3f8f4450-e195-4f9e-aa3c-6f91b28fc203 ResourceVersion:625 Generation:0 CreationTimestamp:2025-12-17 00:12:50 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc0014ad6b0 VolumeMode:0xc0014ad6c0 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-396394 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-396394 apply -f testdata/storage-provisioner/pod.yaml
I1217 00:12:52.653482   16354 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [7bfa220c-85c7-4cfd-a0a0-985c2d30704e] Pending
helpers_test.go:353: "sp-pod" [7bfa220c-85c7-4cfd-a0a0-985c2d30704e] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [7bfa220c-85c7-4cfd-a0a0-985c2d30704e] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 10.00269288s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-396394 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-396394 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-396394 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [5b0da02e-2b2c-49f0-9fff-75758da36172] Pending
helpers_test.go:353: "sp-pod" [5b0da02e-2b2c-49f0-9fff-75758da36172] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.003683034s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-396394 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.37s)
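The sequence above is a persistence check: a file written into the PVC-backed volume must survive deleting and re-creating the pod. Condensed (manifest paths and the sp-pod name come straight from the log):

	kubectl --context functional-396394 apply -f testdata/storage-provisioner/pvc.yaml   # 500Mi ReadWriteOnce claim, per the log
	kubectl --context functional-396394 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-396394 exec sp-pod -- touch /tmp/mount/foo
	kubectl --context functional-396394 delete -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-396394 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-396394 exec sp-pod -- ls /tmp/mount                     # "foo" should still be listed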

                                                
                                    
TestFunctional/parallel/SSHCmd (0.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-396394 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-396394 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.61s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.88s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-396394 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-396394 ssh -n functional-396394 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-396394 cp functional-396394:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd927392969/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-396394 ssh -n functional-396394 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-396394 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-396394 ssh -n functional-396394 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.88s)
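minikube cp accepts both host-to-node and node-to-host forms, which is what the runs above exercise; a minimal sketch (the /tmp destination below is an assumption):

	out/minikube-linux-amd64 -p functional-396394 cp testdata/cp-test.txt /home/docker/cp-test.txt              # host -> node
	out/minikube-linux-amd64 -p functional-396394 cp functional-396394:/home/docker/cp-test.txt /tmp/copy.txt   # node -> host
	out/minikube-linux-amd64 -p functional-396394 ssh -n functional-396394 "sudo cat /home/docker/cp-test.txt"  # verify contents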

                                                
                                    
TestFunctional/parallel/MySQL (19.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-396394 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:353: "mysql-6bcdcbc558-sc8h7" [f3194085-3287-4eda-a0c3-83e1207c78a2] Pending
helpers_test.go:353: "mysql-6bcdcbc558-sc8h7" [f3194085-3287-4eda-a0c3-83e1207c78a2] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:353: "mysql-6bcdcbc558-sc8h7" [f3194085-3287-4eda-a0c3-83e1207c78a2] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 13.129750696s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-396394 exec mysql-6bcdcbc558-sc8h7 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-396394 exec mysql-6bcdcbc558-sc8h7 -- mysql -ppassword -e "show databases;": exit status 1 (166.309039ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1217 00:12:56.771039   16354 retry.go:31] will retry after 1.071906846s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-396394 exec mysql-6bcdcbc558-sc8h7 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-396394 exec mysql-6bcdcbc558-sc8h7 -- mysql -ppassword -e "show databases;": exit status 1 (131.750477ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1217 00:12:57.975062   16354 retry.go:31] will retry after 1.914070125s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-396394 exec mysql-6bcdcbc558-sc8h7 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-396394 exec mysql-6bcdcbc558-sc8h7 -- mysql -ppassword -e "show databases;": exit status 1 (85.572998ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1217 00:12:59.975869   16354 retry.go:31] will retry after 2.780376883s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-396394 exec mysql-6bcdcbc558-sc8h7 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (19.62s)
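The retried "Access denied" and "Can't connect" errors are typical while mysqld is still initializing inside an already-Running pod, so the test simply polls until the query succeeds. An equivalent manual poll (pod name taken from the log) might be:

	# Keep retrying until mysqld is up and accepts the configured root password.
	until kubectl --context functional-396394 exec mysql-6bcdcbc558-sc8h7 -- mysql -ppassword -e "show databases;"; do
	  sleep 2
	done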

                                                
                                    
TestFunctional/parallel/FileSync (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/16354/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-396394 ssh "sudo cat /etc/test/nested/copy/16354/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.33s)
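File sync copies files staged under the host's MINIKUBE_HOME files directory into the node at the same path (that convention is not shown in this log), which is why /etc/test/nested/copy/16354/hosts exists inside the VM; reading the synced copy back is simply:

	out/minikube-linux-amd64 -p functional-396394 ssh "sudo cat /etc/test/nested/copy/16354/hosts"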

                                                
                                    
TestFunctional/parallel/CertSync (1.76s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/16354.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-396394 ssh "sudo cat /etc/ssl/certs/16354.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/16354.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-396394 ssh "sudo cat /usr/share/ca-certificates/16354.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-396394 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/163542.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-396394 ssh "sudo cat /etc/ssl/certs/163542.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/163542.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-396394 ssh "sudo cat /usr/share/ca-certificates/163542.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-396394 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.76s)
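The numeric names 51391683.0 and 3ec20f2e.0 appear to follow the OpenSSL subject-hash convention for CA links alongside the .pem files. Given a local copy of the synced certificate (the ./16354.pem path below is an assumption), the expected link name can be computed with:

	openssl x509 -noout -subject_hash -in ./16354.pem   # prints the 8-hex-digit hash used as the .0 filename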

                                                
                                    
TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-396394 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-396394 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-396394 ssh "sudo systemctl is-active docker": exit status 1 (311.058764ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-396394 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-396394 ssh "sudo systemctl is-active containerd": exit status 1 (330.75717ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.64s)
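"Process exited with status 3" here is systemd's exit code for an inactive unit rather than an SSH failure, so the non-zero exits for docker and containerd are exactly what the test expects on a crio cluster. The complementary positive check (unit name crio assumed) would be:

	out/minikube-linux-amd64 -p functional-396394 ssh "sudo systemctl is-active crio"   # expect "active", exit 0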

                                                
                                    
TestFunctional/parallel/License (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.46s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-396394 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-396394 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-396394 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-396394 tunnel --alsologtostderr] ...
helpers_test.go:520: unable to terminate pid 48931: os: process already finished
helpers_test.go:526: unable to kill pid 48540: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.43s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-396394 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (16.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-396394 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:353: "nginx-svc" [a8bdc1e1-6a41-42cf-9342-3bd63e8a02f1] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx-svc" [a8bdc1e1-6a41-42cf-9342-3bd63e8a02f1] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 16.004001494s
I1217 00:13:00.354496   16354 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (16.21s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-396394 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.109.152.99 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
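The tunnel subtests boil down to: run minikube tunnel, wait for the LoadBalancer service to get an ingress IP, then reach it directly from the host. A manual sketch using values from this run:

	out/minikube-linux-amd64 -p functional-396394 tunnel &   # leave running in the background
	kubectl --context functional-396394 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
	curl http://10.109.152.99/   # ingress IP reported in the "is working" line above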

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-396394 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (7.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-396394 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-396394 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-75c85bcc94-2lwgv" [0e70765b-b634-4357-8a27-005133797439] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-75c85bcc94-2lwgv" [0e70765b-b634-4357-8a27-005133797439] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.003778464s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.13s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.42s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (5.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-396394 /tmp/TestFunctionalparallelMountCmdany-port2753554329/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765930383038603493" to /tmp/TestFunctionalparallelMountCmdany-port2753554329/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765930383038603493" to /tmp/TestFunctionalparallelMountCmdany-port2753554329/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765930383038603493" to /tmp/TestFunctionalparallelMountCmdany-port2753554329/001/test-1765930383038603493
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-396394 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-396394 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (281.878925ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1217 00:13:03.320819   16354 retry.go:31] will retry after 355.511161ms: exit status 1
I1217 00:13:03.447799   16354 detect.go:223] nested VM detected
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-396394 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-396394 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 17 00:13 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 17 00:13 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 17 00:13 test-1765930383038603493
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-396394 ssh cat /mount-9p/test-1765930383038603493
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-396394 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:353: "busybox-mount" [8ef1ff6d-ecb0-409f-822a-b55ec7e2966f] Pending
helpers_test.go:353: "busybox-mount" [8ef1ff6d-ecb0-409f-822a-b55ec7e2966f] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:353: "busybox-mount" [8ef1ff6d-ecb0-409f-822a-b55ec7e2966f] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "busybox-mount" [8ef1ff6d-ecb0-409f-822a-b55ec7e2966f] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 3.003162594s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-396394 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-396394 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-396394 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-396394 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-396394 /tmp/TestFunctionalparallelMountCmdany-port2753554329/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (5.66s)
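The mount subtests drive a 9p mount from the host into the node and then verify it from both sides; a by-hand version (the /tmp/demo-mount host directory is an assumption) looks like:

	out/minikube-linux-amd64 mount -p functional-396394 /tmp/demo-mount:/mount-9p &   # leave the mount process running
	out/minikube-linux-amd64 -p functional-396394 ssh "findmnt -T /mount-9p | grep 9p"
	out/minikube-linux-amd64 -p functional-396394 ssh "ls -la /mount-9p"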

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "337.463367ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "61.018399ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.40s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "347.287947ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "68.308356ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (1.76s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-396394 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-396394 service list: (1.760332325s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.76s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.97s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-396394 /tmp/TestFunctionalparallelMountCmdspecific-port3031245634/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-396394 ssh "findmnt -T /mount-9p | grep 9p"
E1217 00:13:08.721094   16354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/addons-401977/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-396394 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (287.692165ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1217 00:13:08.984514   16354 retry.go:31] will retry after 527.694734ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-396394 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-396394 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-396394 /tmp/TestFunctionalparallelMountCmdspecific-port3031245634/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-396394 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-396394 ssh "sudo umount -f /mount-9p": exit status 1 (306.715822ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-396394 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-396394 /tmp/TestFunctionalparallelMountCmdspecific-port3031245634/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.97s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (1.76s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-396394 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-396394 service list -o json: (1.764324363s)
functional_test.go:1504: Took "1.764448412s" to run "out/minikube-linux-amd64 -p functional-396394 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.76s)

                                                
                                    
TestFunctional/parallel/Version/short (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-396394 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

                                                
                                    
TestFunctional/parallel/Version/components (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-396394 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.52s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-396394 /tmp/TestFunctionalparallelMountCmdVerifyCleanup651752778/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-396394 /tmp/TestFunctionalparallelMountCmdVerifyCleanup651752778/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-396394 /tmp/TestFunctionalparallelMountCmdVerifyCleanup651752778/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-396394 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-396394 ssh "findmnt -T" /mount1: exit status 1 (385.682863ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1217 00:13:11.058182   16354 retry.go:31] will retry after 271.414747ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-396394 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-396394 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-396394 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-396394 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-396394 /tmp/TestFunctionalparallelMountCmdVerifyCleanup651752778/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-396394 /tmp/TestFunctionalparallelMountCmdVerifyCleanup651752778/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-396394 /tmp/TestFunctionalparallelMountCmdVerifyCleanup651752778/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.70s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-396394 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-396394 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.2
registry.k8s.io/kube-proxy:v1.34.2
registry.k8s.io/kube-controller-manager:v1.34.2
registry.k8s.io/kube-apiserver:v1.34.2
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.12.1
public.ecr.aws/nginx/nginx:alpine
public.ecr.aws/docker/library/mysql:8.4
localhost/minikube-local-cache-test:functional-396394
localhost/kicbase/echo-server:functional-396394
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-396394 image ls --format short --alsologtostderr:
I1217 00:13:16.998082   57096 out.go:360] Setting OutFile to fd 1 ...
I1217 00:13:16.998410   57096 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 00:13:16.998448   57096 out.go:374] Setting ErrFile to fd 2...
I1217 00:13:16.998467   57096 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 00:13:16.998810   57096 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-12816/.minikube/bin
I1217 00:13:16.999546   57096 config.go:182] Loaded profile config "functional-396394": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1217 00:13:16.999752   57096 config.go:182] Loaded profile config "functional-396394": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1217 00:13:17.000460   57096 cli_runner.go:164] Run: docker container inspect functional-396394 --format={{.State.Status}}
I1217 00:13:17.022571   57096 ssh_runner.go:195] Run: systemctl --version
I1217 00:13:17.022624   57096 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-396394
I1217 00:13:17.045689   57096 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/functional-396394/id_rsa Username:docker}
I1217 00:13:17.142977   57096 ssh_runner.go:195] Run: sudo crictl images --output json
W1217 00:13:17.174528   57096 root.go:91] failed to log command end to audit: failed to find a log row with id equals to b3684087-1764-428c-a90a-76fb5e7ddc2a
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.25s)
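As the stderr above shows, image ls is answered by running crictl inside the node over SSH; the underlying data can be fetched directly with:

	out/minikube-linux-amd64 -p functional-396394 ssh "sudo crictl images --output json"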

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-396394 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-396394 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ registry.k8s.io/kube-controller-manager │ v1.34.2            │ 01e8bacf0f500 │ 76MB   │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ docker.io/kicbase/echo-server           │ latest             │ 9056ab77afb8e │ 4.94MB │
│ localhost/kicbase/echo-server           │ functional-396394  │ 9056ab77afb8e │ 4.94MB │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ registry.k8s.io/kube-proxy              │ v1.34.2            │ 8aa150647e88a │ 73.1MB │
│ localhost/minikube-local-cache-test     │ functional-396394  │ 7b94a9b95aba8 │ 3.33kB │
│ public.ecr.aws/docker/library/mysql     │ 8.4                │ 20d0be4ee4524 │ 804MB  │
│ public.ecr.aws/nginx/nginx              │ alpine             │ a236f84b9d5d2 │ 55.2MB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/etcd                    │ 3.6.5-0            │ a3e246e9556e9 │ 63.6MB │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.2            │ a5f569d49a979 │ 89MB   │
│ registry.k8s.io/kube-scheduler          │ v1.34.2            │ 88320b5498ff2 │ 53.8MB │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-396394 image ls --format table --alsologtostderr:
I1217 00:13:17.236551   57337 out.go:360] Setting OutFile to fd 1 ...
I1217 00:13:17.236866   57337 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 00:13:17.236880   57337 out.go:374] Setting ErrFile to fd 2...
I1217 00:13:17.236887   57337 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 00:13:17.237180   57337 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-12816/.minikube/bin
I1217 00:13:17.237798   57337 config.go:182] Loaded profile config "functional-396394": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1217 00:13:17.237884   57337 config.go:182] Loaded profile config "functional-396394": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1217 00:13:17.238365   57337 cli_runner.go:164] Run: docker container inspect functional-396394 --format={{.State.Status}}
I1217 00:13:17.258664   57337 ssh_runner.go:195] Run: systemctl --version
I1217 00:13:17.258716   57337 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-396394
I1217 00:13:17.277856   57337 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/functional-396394/id_rsa Username:docker}
I1217 00:13:17.370411   57337 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-396394 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-396394 image ls --format json --alsologtostderr:
[{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},{"id":"a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85","repoDigests":["registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077","registry.k8s.io/kube-apiserver@sha256:f0e0dc00029af1a9258587ef181f17a9eb7605d3d69a72668f4f6709f72005fd"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.2"],"size":"89046001"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"6e38f40d628db3002f
5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c","repoDigests":["public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff","public.ecr.aws/nginx/nginx@sha256:ec57271c43784c07301ebcc4bf37d6011b9b9d661d0cf229f2aa199e78a7312c"],"repoTags":["public.ecr.aws/nginx/nginx:alpine"],"size":"55156597"},{"id":"01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb","registry.k8s.io/kube-controller-manager@sha256:9eb769377f8fdeab9e1428194e2b7d19584b63a5fda8f2
f406900ee7893c2f4e"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.2"],"size":"76004183"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438","repoDigests":["public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036","public.ecr.aws/docker/library/mysql@sha256:5cdee9be17b6b7c804980be29d1bb0ba1536c7afaaed679fe0c1578ea0e3c233"],"repoTags":["public.ecr.aws/docker/library/mysql:8.4"],"size":"803724943"},{"id":"a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534","registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778"],"repoTags":[
"registry.k8s.io/etcd:3.6.5-0"],"size":"63585106"},{"id":"88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952","repoDigests":["registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6","registry.k8s.io/kube-scheduler@sha256:7a0dd12264041dec5dcbb44eeaad051d21560c6d9aa0051cc68ed281a4c26dda"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.2"],"size":"53848919"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524d
d285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45","repoDigests":["registry.k8s.io/kube-proxy@sha256:1512fa1bace72d9bcaa7471e364e972c60805474184840a707b6afa05bde3a74","registry.k8s.io/kube-proxy@sha256:d8
b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.2"],"size":"73145240"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf","localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","localhost/kicbase/echo-server@sha256:d
3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:latest","localhost/kicbase/echo-server:functional-396394"],"size":"4944818"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"7b94a9b95aba814c7662e5fc3a01481c443017ced2cef6234b0f38c9ef57254b","repoDigests":["localhost/minikube-local-cache-test@sha256:e7ac453d2500cd32714f2bd8cc908e22c15e9b51cc5d31827e65ae195a98e375"],"repoTags":["localhost/minikube-local-cache-test:functional-396394"],"size":"3330"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-396394 image ls --format json --alsologtostderr:
I1217 00:13:16.988533   57095 out.go:360] Setting OutFile to fd 1 ...
I1217 00:13:16.990698   57095 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 00:13:16.990717   57095 out.go:374] Setting ErrFile to fd 2...
I1217 00:13:16.990724   57095 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 00:13:16.991189   57095 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-12816/.minikube/bin
I1217 00:13:16.992565   57095 config.go:182] Loaded profile config "functional-396394": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1217 00:13:16.992770   57095 config.go:182] Loaded profile config "functional-396394": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1217 00:13:16.993462   57095 cli_runner.go:164] Run: docker container inspect functional-396394 --format={{.State.Status}}
I1217 00:13:17.016434   57095 ssh_runner.go:195] Run: systemctl --version
I1217 00:13:17.016487   57095 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-396394
I1217 00:13:17.036560   57095 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/functional-396394/id_rsa Username:docker}
I1217 00:13:17.137648   57095 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)
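For reference, the listing above can be reproduced by hand against the same profile. A minimal sketch, assuming a minikube binary on PATH (the test drives its own build at out/minikube-linux-amd64) and that the functional-396394 profile is still running:

    # list images in the node's container runtime, in the two formats exercised here
    minikube -p functional-396394 image ls --format json
    minikube -p functional-396394 image ls --format yaml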

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-396394 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-396394 image ls --format yaml --alsologtostderr:
- id: a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
- registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "63585106"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
- localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:latest
- localhost/kicbase/echo-server:functional-396394
size: "4944818"
- id: 20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438
repoDigests:
- public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036
- public.ecr.aws/docker/library/mysql@sha256:5cdee9be17b6b7c804980be29d1bb0ba1536c7afaaed679fe0c1578ea0e3c233
repoTags:
- public.ecr.aws/docker/library/mysql:8.4
size: "803724943"
- id: a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c
repoDigests:
- public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff
- public.ecr.aws/nginx/nginx@sha256:ec57271c43784c07301ebcc4bf37d6011b9b9d661d0cf229f2aa199e78a7312c
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "55156597"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 7b94a9b95aba814c7662e5fc3a01481c443017ced2cef6234b0f38c9ef57254b
repoDigests:
- localhost/minikube-local-cache-test@sha256:e7ac453d2500cd32714f2bd8cc908e22c15e9b51cc5d31827e65ae195a98e375
repoTags:
- localhost/minikube-local-cache-test:functional-396394
size: "3330"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: 01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb
- registry.k8s.io/kube-controller-manager@sha256:9eb769377f8fdeab9e1428194e2b7d19584b63a5fda8f2f406900ee7893c2f4e
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.2
size: "76004183"
- id: 88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6
- registry.k8s.io/kube-scheduler@sha256:7a0dd12264041dec5dcbb44eeaad051d21560c6d9aa0051cc68ed281a4c26dda
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.2
size: "53848919"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077
- registry.k8s.io/kube-apiserver@sha256:f0e0dc00029af1a9258587ef181f17a9eb7605d3d69a72668f4f6709f72005fd
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.2
size: "89046001"
- id: 8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45
repoDigests:
- registry.k8s.io/kube-proxy@sha256:1512fa1bace72d9bcaa7471e364e972c60805474184840a707b6afa05bde3a74
- registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5
repoTags:
- registry.k8s.io/kube-proxy:v1.34.2
size: "73145240"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-396394 image ls --format yaml --alsologtostderr:
I1217 00:13:16.997117   57093 out.go:360] Setting OutFile to fd 1 ...
I1217 00:13:16.997415   57093 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 00:13:16.997458   57093 out.go:374] Setting ErrFile to fd 2...
I1217 00:13:16.997474   57093 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 00:13:16.998019   57093 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-12816/.minikube/bin
I1217 00:13:16.998609   57093 config.go:182] Loaded profile config "functional-396394": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1217 00:13:16.998723   57093 config.go:182] Loaded profile config "functional-396394": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1217 00:13:16.999157   57093 cli_runner.go:164] Run: docker container inspect functional-396394 --format={{.State.Status}}
I1217 00:13:17.021161   57093 ssh_runner.go:195] Run: systemctl --version
I1217 00:13:17.021225   57093 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-396394
I1217 00:13:17.045195   57093 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/functional-396394/id_rsa Username:docker}
I1217 00:13:17.140569   57093 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (2.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-396394 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-396394 ssh pgrep buildkitd: exit status 1 (283.495383ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-396394 image build -t localhost/my-image:functional-396394 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-396394 image build -t localhost/my-image:functional-396394 testdata/build --alsologtostderr: (1.851539213s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-396394 image build -t localhost/my-image:functional-396394 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 1dcb241fec6
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-396394
--> 19bf5224952
Successfully tagged localhost/my-image:functional-396394
19bf5224952f8670238f4652a417407f7453c67f45c39f85eb065c0717edbc12
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-396394 image build -t localhost/my-image:functional-396394 testdata/build --alsologtostderr:
I1217 00:13:17.269084   57348 out.go:360] Setting OutFile to fd 1 ...
I1217 00:13:17.269219   57348 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 00:13:17.269228   57348 out.go:374] Setting ErrFile to fd 2...
I1217 00:13:17.269233   57348 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 00:13:17.269418   57348 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-12816/.minikube/bin
I1217 00:13:17.269957   57348 config.go:182] Loaded profile config "functional-396394": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1217 00:13:17.270649   57348 config.go:182] Loaded profile config "functional-396394": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1217 00:13:17.271156   57348 cli_runner.go:164] Run: docker container inspect functional-396394 --format={{.State.Status}}
I1217 00:13:17.289328   57348 ssh_runner.go:195] Run: systemctl --version
I1217 00:13:17.289373   57348 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-396394
I1217 00:13:17.308191   57348 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/functional-396394/id_rsa Username:docker}
I1217 00:13:17.401624   57348 build_images.go:162] Building image from path: /tmp/build.2530323916.tar
I1217 00:13:17.401684   57348 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1217 00:13:17.410429   57348 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2530323916.tar
I1217 00:13:17.414834   57348 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2530323916.tar: stat -c "%s %y" /var/lib/minikube/build/build.2530323916.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2530323916.tar': No such file or directory
I1217 00:13:17.414857   57348 ssh_runner.go:362] scp /tmp/build.2530323916.tar --> /var/lib/minikube/build/build.2530323916.tar (3072 bytes)
I1217 00:13:17.431793   57348 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2530323916
I1217 00:13:17.439217   57348 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2530323916 -xf /var/lib/minikube/build/build.2530323916.tar
I1217 00:13:17.446713   57348 crio.go:315] Building image: /var/lib/minikube/build/build.2530323916
I1217 00:13:17.446814   57348 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-396394 /var/lib/minikube/build/build.2530323916 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1217 00:13:19.040623   57348 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-396394 /var/lib/minikube/build/build.2530323916 --cgroup-manager=cgroupfs: (1.593778662s)
I1217 00:13:19.040711   57348 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2530323916
I1217 00:13:19.048374   57348 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2530323916.tar
I1217 00:13:19.055624   57348 build_images.go:218] Built localhost/my-image:functional-396394 from /tmp/build.2530323916.tar
I1217 00:13:19.055647   57348 build_images.go:134] succeeded building to: functional-396394
I1217 00:13:19.055652   57348 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-396394 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.35s)
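The build above is executed inside the node via podman. A minimal manual sketch under stated assumptions: the build context directory, its content.txt, and the Dockerfile below are reconstructed from the STEP 1/3..3/3 lines, not copied from the repo's testdata/build:

    mkdir -p build && cd build
    echo hello > content.txt
    cat > Dockerfile <<'EOF'
    FROM gcr.io/k8s-minikube/busybox
    RUN true
    ADD content.txt /
    EOF
    # build inside the minikube node and confirm the image is visible there
    minikube -p functional-396394 image build -t localhost/my-image:functional-396394 .
    minikube -p functional-396394 image ls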

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.150955356s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-396394
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.17s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-396394 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.49.2:30796
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.59s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-396394 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.61s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-396394 image load --daemon kicbase/echo-server:functional-396394 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-396394 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.17s)
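A condensed sketch of the load-from-docker-daemon path exercised here, assuming the echo-server image is already present in the local docker daemon (the Setup subtest above pulled and tagged it):

    docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-396394
    minikube -p functional-396394 image load --daemon kicbase/echo-server:functional-396394
    minikube -p functional-396394 image ls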

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-396394 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-396394 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.16s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-396394 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.16s)
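The three update-context subtests run the same command and differ only in the kubeconfig state beforehand. A minimal manual equivalent, assuming kubectl is installed alongside minikube:

    minikube -p functional-396394 update-context
    kubectl config current-context    # should now point at functional-396394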

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-396394 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:30796
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.56s)
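The ServiceCmd variants (HTTPS, Format, URL) all resolve the same NodePort, just in different forms. A minimal sketch, assuming the hello-node service deployed by earlier subtests is still present:

    minikube -p functional-396394 service hello-node --url                               # plain http URL
    minikube -p functional-396394 service --namespace=default --https --url hello-node   # https form
    minikube -p functional-396394 service hello-node --url --format={{.IP}}              # node IP only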

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.92s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-396394 image load --daemon kicbase/echo-server:functional-396394 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-396394 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.92s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
2025/12/17 00:13:14 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-396394
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-396394 image load --daemon kicbase/echo-server:functional-396394 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-396394 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.20s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-396394 image save kicbase/echo-server:functional-396394 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.33s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-396394 image rm kicbase/echo-server:functional-396394 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-396394 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.47s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-396394 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-396394 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.55s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-396394
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-396394 image save --daemon kicbase/echo-server:functional-396394 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-396394
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.37s)
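Taken together, ImageSaveToFile, ImageRemove, ImageLoadFromFile and ImageSaveDaemon exercise a full save/remove/reload round trip. A condensed manual sketch, assuming a writable path for the tarball (the test writes into its Jenkins workspace instead):

    minikube -p functional-396394 image save kicbase/echo-server:functional-396394 /tmp/echo-server-save.tar
    minikube -p functional-396394 image rm kicbase/echo-server:functional-396394
    minikube -p functional-396394 image load /tmp/echo-server-save.tar
    minikube -p functional-396394 image save --daemon kicbase/echo-server:functional-396394
    docker image inspect localhost/kicbase/echo-server:functional-396394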

                                                
                                    
TestFunctional/delete_echo-server_images (0.03s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-396394
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-396394
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-396394
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22168-12816/.minikube/files/etc/test/nested/copy/16354/hosts
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (65.32s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-484344 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-484344 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (1m5.316818373s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (65.32s)
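The start invocation above is the reference configuration for this job. A sketch of the same flags for a local reproduction, assuming the docker driver is usable on the host:

    minikube start -p functional-484344 \
      --memory=4096 --apiserver-port=8441 --wait=all \
      --driver=docker --container-runtime=crio \
      --kubernetes-version=v1.35.0-beta.0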

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (6.09s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart
I1217 00:14:27.672802   16354 config.go:182] Loaded profile config "functional-484344": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-484344 --alsologtostderr -v=8
E1217 00:14:30.642781   16354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/addons-401977/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-484344 --alsologtostderr -v=8: (6.091071726s)
functional_test.go:678: soft start took 6.09143781s for "functional-484344" cluster.
I1217 00:14:33.764235   16354 config.go:182] Loaded profile config "functional-484344": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (6.09s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.04s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-484344 get po -A
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (0.06s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (2.59s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-484344 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-484344 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-484344 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (2.59s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (1.21s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-484344 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialCach1701361070/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-484344 cache add minikube-local-cache-test:functional-484344
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-484344 cache delete minikube-local-cache-test:functional-484344
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-484344
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (1.21s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-484344 ssh sudo crictl images
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.46s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-484344 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-484344 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-484344 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (271.651798ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-484344 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-484344 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.46s)
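The CacheCmd subtests walk the whole cache lifecycle: add images to minikube's local cache, confirm they are present inside the node, delete one in the node's runtime, and restore it with cache reload. A minimal sketch of that sequence against the same profile:

    minikube -p functional-484344 cache add registry.k8s.io/pause:latest
    minikube cache list
    minikube -p functional-484344 ssh sudo crictl images
    minikube -p functional-484344 ssh sudo crictl rmi registry.k8s.io/pause:latest
    minikube -p functional-484344 cache reload
    minikube -p functional-484344 ssh sudo crictl inspecti registry.k8s.io/pause:latest
    minikube cache delete registry.k8s.io/pause:latest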

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.12s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (0.12s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-484344 kubectl -- --context functional-484344 get pods
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (0.12s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-484344 get pods
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (59.59s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-484344 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-484344 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (59.585315397s)
functional_test.go:776: restart took 59.585465754s for "functional-484344" cluster.
I1217 00:15:39.477440   16354 config.go:182] Loaded profile config "functional-484344": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (59.59s)
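This subtest restarts the existing profile with an apiserver admission-plugin override; most of the ~60s is spent in --wait=all until the control plane is Ready again. The equivalent manual restart:

    minikube start -p functional-484344 \
      --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all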

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-484344 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (0.06s)
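The health check above is a plain kubectl query over the control-plane pods, whose phase and Ready status the test then inspects. A minimal manual equivalent, assuming kubectl and the functional-484344 context:

    kubectl --context functional-484344 get po -l tier=control-plane -n kube-system -o=json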

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (1.17s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-484344 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-484344 logs: (1.167391833s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (1.17s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (1.21s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-484344 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs647694319/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-484344 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs647694319/001/logs.txt: (1.20769672s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (1.21s)
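Both logs subtests collect the same information; the second just redirects it to a file, which is the form usually attached to bug reports. Manually (the output path below is an arbitrary example, not the one the test uses):

    minikube -p functional-484344 logs
    minikube -p functional-484344 logs --file /tmp/logs.txt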

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (4.91s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-484344 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-484344
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-484344: exit status 115 (333.850418ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:32352 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-484344 delete -f testdata/invalidsvc.yaml
functional_test.go:2332: (dbg) Done: kubectl --context functional-484344 delete -f testdata/invalidsvc.yaml: (1.412017528s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (4.91s)
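This subtest deliberately creates a Service with no backing pod and confirms that `minikube service` fails with SVC_UNREACHABLE (exit status 115) rather than printing a dead URL. A minimal sketch, assuming the invalidsvc.yaml manifest from the test's testdata directory:

    kubectl --context functional-484344 apply -f testdata/invalidsvc.yaml
    minikube -p functional-484344 service invalid-svc     # expected to exit non-zero
    kubectl --context functional-484344 delete -f testdata/invalidsvc.yaml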

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.46s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-484344 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-484344 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-484344 config get cpus: exit status 14 (82.83634ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-484344 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-484344 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-484344 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-484344 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-484344 config get cpus: exit status 14 (91.347004ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.46s)
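The exit status 14 cases above are expected: `config get` on an unset key is an error, not a failure of the test. The full set/get/unset cycle, run manually:

    minikube -p functional-484344 config unset cpus
    minikube -p functional-484344 config get cpus    # exit 14: key not found
    minikube -p functional-484344 config set cpus 2
    minikube -p functional-484344 config get cpus    # prints 2
    minikube -p functional-484344 config unset cpus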

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (10.34s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-484344 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-484344 --alsologtostderr -v=1] ...
helpers_test.go:526: unable to kill pid 73115: os: process already finished
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (10.34s)
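The "unable to kill pid" note appears benign: the dashboard proxy had already exited by the time the test cleaned it up, and the subtest still passed. To run the same thing interactively (the --port value is simply the one this test chose):

    minikube -p functional-484344 dashboard --url --port 36195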

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.77s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-484344 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-484344 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 23 (489.866109ms)

                                                
                                                
-- stdout --
	* [functional-484344] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22168
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22168-12816/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-12816/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 00:16:03.212020   72389 out.go:360] Setting OutFile to fd 1 ...
	I1217 00:16:03.212171   72389 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:16:03.212183   72389 out.go:374] Setting ErrFile to fd 2...
	I1217 00:16:03.212190   72389 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:16:03.212471   72389 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-12816/.minikube/bin
	I1217 00:16:03.213103   72389 out.go:368] Setting JSON to false
	I1217 00:16:03.214575   72389 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3513,"bootTime":1765927050,"procs":243,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 00:16:03.214660   72389 start.go:143] virtualization: kvm guest
	I1217 00:16:03.263523   72389 out.go:179] * [functional-484344] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 00:16:03.276725   72389 notify.go:221] Checking for updates...
	I1217 00:16:03.276744   72389 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 00:16:03.295774   72389 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 00:16:03.313200   72389 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22168-12816/kubeconfig
	I1217 00:16:03.386689   72389 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-12816/.minikube
	I1217 00:16:03.388561   72389 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 00:16:03.390292   72389 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 00:16:03.393453   72389 config.go:182] Loaded profile config "functional-484344": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1217 00:16:03.394248   72389 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 00:16:03.421436   72389 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1217 00:16:03.421531   72389 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:16:03.492206   72389 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-17 00:16:03.480733738 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 00:16:03.492338   72389 docker.go:319] overlay module found
	I1217 00:16:03.527311   72389 out.go:179] * Using the docker driver based on existing profile
	I1217 00:16:03.577626   72389 start.go:309] selected driver: docker
	I1217 00:16:03.577649   72389 start.go:927] validating driver "docker" against &{Name:functional-484344 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-484344 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountO
ptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:16:03.577842   72389 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 00:16:03.583280   72389 out.go:203] 
	W1217 00:16:03.615251   72389 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1217 00:16:03.616342   72389 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-484344 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.77s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.23s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-484344 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-484344 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 23 (230.092688ms)

                                                
                                                
-- stdout --
	* [functional-484344] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22168
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22168-12816/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-12816/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 00:16:03.983727   72838 out.go:360] Setting OutFile to fd 1 ...
	I1217 00:16:03.983848   72838 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:16:03.983876   72838 out.go:374] Setting ErrFile to fd 2...
	I1217 00:16:03.983895   72838 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:16:03.984192   72838 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-12816/.minikube/bin
	I1217 00:16:03.984633   72838 out.go:368] Setting JSON to false
	I1217 00:16:03.985608   72838 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3514,"bootTime":1765927050,"procs":238,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 00:16:03.985689   72838 start.go:143] virtualization: kvm guest
	I1217 00:16:03.987382   72838 out.go:179] * [functional-484344] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1217 00:16:03.988726   72838 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 00:16:03.988739   72838 notify.go:221] Checking for updates...
	I1217 00:16:03.990974   72838 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 00:16:03.992334   72838 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22168-12816/kubeconfig
	I1217 00:16:03.993494   72838 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-12816/.minikube
	I1217 00:16:03.994567   72838 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 00:16:03.995656   72838 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 00:16:03.997335   72838 config.go:182] Loaded profile config "functional-484344": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1217 00:16:03.998147   72838 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 00:16:04.032612   72838 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1217 00:16:04.032700   72838 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:16:04.118363   72838 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-17 00:16:04.106986886 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 00:16:04.118493   72838 docker.go:319] overlay module found
	I1217 00:16:04.120662   72838 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1217 00:16:04.121698   72838 start.go:309] selected driver: docker
	I1217 00:16:04.121716   72838 start.go:927] validating driver "docker" against &{Name:functional-484344 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-484344 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountO
ptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:16:04.121830   72838 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 00:16:04.123606   72838 out.go:203] 
	W1217 00:16:04.124963   72838 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1217 00:16:04.126183   72838 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.23s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (1.2s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-484344 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-484344 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-484344 status -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (1.20s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (7.78s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-484344 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-484344 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-9f67c86d4-wbzph" [c98f54b1-6e96-438f-9ecf-a065cbf0500b] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-connect-9f67c86d4-wbzph" [c98f54b1-6e96-438f-9ecf-a065cbf0500b] Running
functional_test.go:1645: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.004590448s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-484344 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.49.2:30528
functional_test.go:1680: http://192.168.49.2:30528: success! body:
Request served by hello-node-connect-9f67c86d4-wbzph

                                                
                                                
HTTP/1.1 GET /

                                                
                                                
Host: 192.168.49.2:30528
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (7.78s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.17s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-484344 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-484344 addons list -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.17s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (23.82s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [43e7f210-c606-4256-a847-008a79e14d9c] Running
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.002301539s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-484344 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-484344 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-484344 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-484344 apply -f testdata/storage-provisioner/pod.yaml
I1217 00:15:54.700973   16354 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [e64959e4-933b-4640-9031-273430ec0262] Pending: PodScheduled:Unschedulable (0/1 nodes are available: persistentvolume "pvc-92ffcfb5-9a5e-4431-a8ab-6b16bb9048d3" not found. not found)
helpers_test.go:353: "sp-pod" [e64959e4-933b-4640-9031-273430ec0262] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [e64959e4-933b-4640-9031-273430ec0262] Running
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.003629354s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-484344 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-484344 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-484344 delete -f testdata/storage-provisioner/pod.yaml: (1.050079492s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-484344 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [63947935-e847-4c0e-88aa-cad80e2411a8] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [63947935-e847-4c0e-88aa-cad80e2411a8] Running
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 9.00415874s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-484344 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (23.82s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.67s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-484344 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-484344 ssh "cat /etc/hostname"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.67s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (1.87s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-484344 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-484344 ssh -n functional-484344 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-484344 cp functional-484344:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelCp3219364181/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-484344 ssh -n functional-484344 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-484344 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-484344 ssh -n functional-484344 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (1.87s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (23.24s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-484344 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:353: "mysql-7d7b65bc95-xvb7s" [baa6cdd2-b9fb-4e75-856f-a12894bd7997] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:353: "mysql-7d7b65bc95-xvb7s" [baa6cdd2-b9fb-4e75-856f-a12894bd7997] Running
functional_test.go:1804: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL: app=mysql healthy within 16.004413696s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-484344 exec mysql-7d7b65bc95-xvb7s -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-484344 exec mysql-7d7b65bc95-xvb7s -- mysql -ppassword -e "show databases;": exit status 1 (104.367118ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1217 00:16:14.681810   16354 retry.go:31] will retry after 622.116671ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-484344 exec mysql-7d7b65bc95-xvb7s -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-484344 exec mysql-7d7b65bc95-xvb7s -- mysql -ppassword -e "show databases;": exit status 1 (113.594846ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1217 00:16:15.418222   16354 retry.go:31] will retry after 1.245112419s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-484344 exec mysql-7d7b65bc95-xvb7s -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-484344 exec mysql-7d7b65bc95-xvb7s -- mysql -ppassword -e "show databases;": exit status 1 (118.91651ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1217 00:16:16.783361   16354 retry.go:31] will retry after 2.378012319s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-484344 exec mysql-7d7b65bc95-xvb7s -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-484344 exec mysql-7d7b65bc95-xvb7s -- mysql -ppassword -e "show databases;": exit status 1 (83.366443ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1217 00:16:19.245843   16354 retry.go:31] will retry after 2.327484672s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-484344 exec mysql-7d7b65bc95-xvb7s -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (23.24s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.27s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/16354/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-484344 ssh "sudo cat /etc/test/nested/copy/16354/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.27s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.86s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/16354.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-484344 ssh "sudo cat /etc/ssl/certs/16354.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/16354.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-484344 ssh "sudo cat /usr/share/ca-certificates/16354.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-484344 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/163542.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-484344 ssh "sudo cat /etc/ssl/certs/163542.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/163542.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-484344 ssh "sudo cat /usr/share/ca-certificates/163542.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-484344 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.86s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-484344 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (0.07s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.6s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-484344 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-484344 ssh "sudo systemctl is-active docker": exit status 1 (294.796213ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-484344 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-484344 ssh "sudo systemctl is-active containerd": exit status 1 (308.558545ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.60s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.47s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.47s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-484344 version --short
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.07s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.49s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-484344 version -o=json --components
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.49s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.24s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-484344 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-484344 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0-beta.0
registry.k8s.io/kube-proxy:v1.35.0-beta.0
registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
registry.k8s.io/kube-apiserver:v1.35.0-beta.0
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.13.1
public.ecr.aws/nginx/nginx:alpine
public.ecr.aws/docker/library/mysql:8.4
localhost/minikube-local-cache-test:functional-484344
localhost/kicbase/echo-server:functional-484344
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-484344 image ls --format short --alsologtostderr:
I1217 00:16:13.789559   74107 out.go:360] Setting OutFile to fd 1 ...
I1217 00:16:13.789763   74107 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 00:16:13.789771   74107 out.go:374] Setting ErrFile to fd 2...
I1217 00:16:13.789775   74107 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 00:16:13.790067   74107 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-12816/.minikube/bin
I1217 00:16:13.790662   74107 config.go:182] Loaded profile config "functional-484344": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1217 00:16:13.790788   74107 config.go:182] Loaded profile config "functional-484344": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1217 00:16:13.791278   74107 cli_runner.go:164] Run: docker container inspect functional-484344 --format={{.State.Status}}
I1217 00:16:13.810664   74107 ssh_runner.go:195] Run: systemctl --version
I1217 00:16:13.810761   74107 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-484344
I1217 00:16:13.828928   74107 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/functional-484344/id_rsa Username:docker}
I1217 00:16:13.921310   74107 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.24s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.25s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-484344 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-484344 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ registry.k8s.io/kube-apiserver          │ v1.35.0-beta.0     │ aa9d02839d8de │ 90.8MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ public.ecr.aws/nginx/nginx              │ alpine             │ a236f84b9d5d2 │ 55.2MB │
│ registry.k8s.io/kube-scheduler          │ v1.35.0-beta.0     │ 7bb6219ddab95 │ 52.7MB │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ public.ecr.aws/docker/library/mysql     │ 8.4                │ 20d0be4ee4524 │ 804MB  │
│ registry.k8s.io/etcd                    │ 3.6.5-0            │ a3e246e9556e9 │ 63.6MB │
│ registry.k8s.io/kube-controller-manager │ v1.35.0-beta.0     │ 45f3cc72d235f │ 76.9MB │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ docker.io/kicbase/echo-server           │ latest             │ 9056ab77afb8e │ 4.94MB │
│ localhost/kicbase/echo-server           │ functional-484344  │ 9056ab77afb8e │ 4.94MB │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ localhost/minikube-local-cache-test     │ functional-484344  │ 7b94a9b95aba8 │ 3.33kB │
│ registry.k8s.io/coredns/coredns         │ v1.13.1            │ aa5e3ebc0dfed │ 79.2MB │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ registry.k8s.io/kube-proxy              │ v1.35.0-beta.0     │ 8a4ded35a3eb1 │ 72MB   │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-484344 image ls --format table --alsologtostderr:
I1217 00:16:14.801953   74743 out.go:360] Setting OutFile to fd 1 ...
I1217 00:16:14.802239   74743 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 00:16:14.802250   74743 out.go:374] Setting ErrFile to fd 2...
I1217 00:16:14.802257   74743 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 00:16:14.802486   74743 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-12816/.minikube/bin
I1217 00:16:14.803084   74743 config.go:182] Loaded profile config "functional-484344": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1217 00:16:14.803213   74743 config.go:182] Loaded profile config "functional-484344": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1217 00:16:14.803638   74743 cli_runner.go:164] Run: docker container inspect functional-484344 --format={{.State.Status}}
I1217 00:16:14.824715   74743 ssh_runner.go:195] Run: systemctl --version
I1217 00:16:14.824761   74743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-484344
I1217 00:16:14.842633   74743 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/functional-484344/id_rsa Username:docker}
I1217 00:16:14.932155   74743 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.25s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.26s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-484344 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-484344 image ls --format json --alsologtostderr:
[{"id":"a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c","repoDigests":["public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff","public.ecr.aws/nginx/nginx@sha256:ec57271c43784c07301ebcc4bf37d6011b9b9d661d0cf229f2aa199e78a7312c"],"repoTags":["public.ecr.aws/nginx/nginx:alpine"],"size":"55156597"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf","localhost/ki
cbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:latest","localhost/kicbase/echo-server:functional-484344"],"size":"4943877"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438","repoDigests":["public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036","public.ecr.aws/docker/library/mys
ql@sha256:5cdee9be17b6b7c804980be29d1bb0ba1536c7afaaed679fe0c1578ea0e3c233"],"repoTags":["public.ecr.aws/docker/library/mysql:8.4"],"size":"803724943"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"115053965e86b2df4d78af78d7951b8644
839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534","registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"63585106"},{"id":"a
a9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b","repoDigests":["registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58","registry.k8s.io/kube-apiserver@sha256:c95487a138f982d925eb8c59c7fc40761c58af445463ac4df872aee36c5e999c"],"repoTags":["registry.k8s.io/kube-apiserver:v1.35.0-beta.0"],"size":"90819569"},{"id":"8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810","repoDigests":["registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a","registry.k8s.io/kube-proxy@sha256:70a55889ba3d6b048529c8edae375ce2f20d1204f3bbcacd24e617abe8888b82"],"repoTags":["registry.k8s.io/kube-proxy:v1.35.0-beta.0"],"size":"71977881"},{"id":"7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46","repoDigests":["registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6","registry.k8s.io/kube-scheduler@sha256:bb3d10b07de89c1e36a78794573fdbb7939a465d235a5bd164bae4
3aec22ee5b"],"repoTags":["registry.k8s.io/kube-scheduler:v1.35.0-beta.0"],"size":"52747095"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139","repoDigests":["registry.k8s.io/coredns/coredns@sha256:246e7333fde10251c693b68f13d21d6d64c7dbad866bbfa11bd49315e3f725a7","regis
try.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"79193994"},{"id":"45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d","registry.k8s.io/kube-controller-manager@sha256:ca8b699e445178c1fc4a8f31245d6bd7bd97192cc7b43baa2360522e09b55581"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"],"size":"76872535"},{"id":"7b94a9b95aba814c7662e5fc3a01481c443017ced2cef6234b0f38c9ef57254b","repoDigests":["localhost/minikube-local-cache-test@sha256:e7ac453d2500cd32714f2bd8cc908e22c15e9b51cc5d31827e65ae195a98e375"],"repoTags":["localhost/minikube-local-cache-test:functional-484344"],"size":"3330"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-484344 image ls --format json --alsologtostderr:
I1217 00:16:14.527397   74574 out.go:360] Setting OutFile to fd 1 ...
I1217 00:16:14.527665   74574 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 00:16:14.527676   74574 out.go:374] Setting ErrFile to fd 2...
I1217 00:16:14.527680   74574 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 00:16:14.527901   74574 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-12816/.minikube/bin
I1217 00:16:14.528575   74574 config.go:182] Loaded profile config "functional-484344": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1217 00:16:14.528703   74574 config.go:182] Loaded profile config "functional-484344": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1217 00:16:14.529194   74574 cli_runner.go:164] Run: docker container inspect functional-484344 --format={{.State.Status}}
I1217 00:16:14.548729   74574 ssh_runner.go:195] Run: systemctl --version
I1217 00:16:14.548787   74574 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-484344
I1217 00:16:14.570613   74574 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/functional-484344/id_rsa Username:docker}
I1217 00:16:14.668128   74574 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.26s)
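
The listing above is the raw JSON emitted by `image ls --format json`. A minimal sketch of querying it from a shell, assuming the functional-484344 profile from this run and that jq is installed on the host (jq is not part of the test itself):

# List images known to the cluster's CRI-O storage and print only the tagged references.
out/minikube-linux-amd64 -p functional-484344 image ls --format json \
  | jq -r '.[] | .repoTags[]?'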

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.26s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-484344 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-484344 image ls --format yaml --alsologtostderr:
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438
repoDigests:
- public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036
- public.ecr.aws/docker/library/mysql@sha256:5cdee9be17b6b7c804980be29d1bb0ba1536c7afaaed679fe0c1578ea0e3c233
repoTags:
- public.ecr.aws/docker/library/mysql:8.4
size: "803724943"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d
- registry.k8s.io/kube-controller-manager@sha256:ca8b699e445178c1fc4a8f31245d6bd7bd97192cc7b43baa2360522e09b55581
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
size: "76872535"
- id: 7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6
- registry.k8s.io/kube-scheduler@sha256:bb3d10b07de89c1e36a78794573fdbb7939a465d235a5bd164bae43aec22ee5b
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0-beta.0
size: "52747095"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a
- registry.k8s.io/kube-proxy@sha256:70a55889ba3d6b048529c8edae375ce2f20d1204f3bbcacd24e617abe8888b82
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0-beta.0
size: "71977881"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
- localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:latest
- localhost/kicbase/echo-server:functional-484344
size: "4943877"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 7b94a9b95aba814c7662e5fc3a01481c443017ced2cef6234b0f38c9ef57254b
repoDigests:
- localhost/minikube-local-cache-test@sha256:e7ac453d2500cd32714f2bd8cc908e22c15e9b51cc5d31827e65ae195a98e375
repoTags:
- localhost/minikube-local-cache-test:functional-484344
size: "3330"
- id: aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:246e7333fde10251c693b68f13d21d6d64c7dbad866bbfa11bd49315e3f725a7
- registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "79193994"
- id: aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58
- registry.k8s.io/kube-apiserver@sha256:c95487a138f982d925eb8c59c7fc40761c58af445463ac4df872aee36c5e999c
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0-beta.0
size: "90819569"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c
repoDigests:
- public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff
- public.ecr.aws/nginx/nginx@sha256:ec57271c43784c07301ebcc4bf37d6011b9b9d661d0cf229f2aa199e78a7312c
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "55156597"
- id: a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
- registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "63585106"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-484344 image ls --format yaml --alsologtostderr:
I1217 00:16:14.033169   74212 out.go:360] Setting OutFile to fd 1 ...
I1217 00:16:14.033402   74212 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 00:16:14.033413   74212 out.go:374] Setting ErrFile to fd 2...
I1217 00:16:14.033419   74212 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 00:16:14.033607   74212 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-12816/.minikube/bin
I1217 00:16:14.034217   74212 config.go:182] Loaded profile config "functional-484344": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1217 00:16:14.034321   74212 config.go:182] Loaded profile config "functional-484344": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1217 00:16:14.034731   74212 cli_runner.go:164] Run: docker container inspect functional-484344 --format={{.State.Status}}
I1217 00:16:14.054486   74212 ssh_runner.go:195] Run: systemctl --version
I1217 00:16:14.054576   74212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-484344
I1217 00:16:14.076129   74212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/functional-484344/id_rsa Username:docker}
I1217 00:16:14.172558   74212 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.26s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (2.29s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-484344 ssh pgrep buildkitd
2025/12/17 00:16:14 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-484344 ssh pgrep buildkitd: exit status 1 (280.961128ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-484344 image build -t localhost/my-image:functional-484344 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-484344 image build -t localhost/my-image:functional-484344 testdata/build --alsologtostderr: (1.733536711s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-484344 image build -t localhost/my-image:functional-484344 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> bf0398d24f1
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-484344
--> f24a9e4d60d
Successfully tagged localhost/my-image:functional-484344
f24a9e4d60d5a84fa57a44a70bdfd0685f4c8d49dbfdfb7ce9eff4e388c8e12f
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-484344 image build -t localhost/my-image:functional-484344 testdata/build --alsologtostderr:
I1217 00:16:14.570069   74584 out.go:360] Setting OutFile to fd 1 ...
I1217 00:16:14.570175   74584 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 00:16:14.570185   74584 out.go:374] Setting ErrFile to fd 2...
I1217 00:16:14.570189   74584 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 00:16:14.570396   74584 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-12816/.minikube/bin
I1217 00:16:14.571118   74584 config.go:182] Loaded profile config "functional-484344": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1217 00:16:14.571769   74584 config.go:182] Loaded profile config "functional-484344": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1217 00:16:14.572275   74584 cli_runner.go:164] Run: docker container inspect functional-484344 --format={{.State.Status}}
I1217 00:16:14.594173   74584 ssh_runner.go:195] Run: systemctl --version
I1217 00:16:14.594213   74584 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-484344
I1217 00:16:14.613595   74584 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/functional-484344/id_rsa Username:docker}
I1217 00:16:14.710206   74584 build_images.go:162] Building image from path: /tmp/build.357958197.tar
I1217 00:16:14.710297   74584 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1217 00:16:14.719411   74584 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.357958197.tar
I1217 00:16:14.723534   74584 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.357958197.tar: stat -c "%s %y" /var/lib/minikube/build/build.357958197.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.357958197.tar': No such file or directory
I1217 00:16:14.723566   74584 ssh_runner.go:362] scp /tmp/build.357958197.tar --> /var/lib/minikube/build/build.357958197.tar (3072 bytes)
I1217 00:16:14.746436   74584 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.357958197
I1217 00:16:14.755644   74584 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.357958197 -xf /var/lib/minikube/build/build.357958197.tar
I1217 00:16:14.765813   74584 crio.go:315] Building image: /var/lib/minikube/build/build.357958197
I1217 00:16:14.765879   74584 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-484344 /var/lib/minikube/build/build.357958197 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1217 00:16:16.208478   74584 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-484344 /var/lib/minikube/build/build.357958197 --cgroup-manager=cgroupfs: (1.442577006s)
I1217 00:16:16.208541   74584 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.357958197
I1217 00:16:16.216876   74584 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.357958197.tar
I1217 00:16:16.224218   74584 build_images.go:218] Built localhost/my-image:functional-484344 from /tmp/build.357958197.tar
I1217 00:16:16.224241   74584 build_images.go:134] succeeded building to: functional-484344
I1217 00:16:16.224245   74584 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-484344 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (2.29s)
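
The STEP 1/3 through 3/3 lines above imply a three-line build context under testdata/build. A minimal sketch that reproduces the same flow with a scratch directory; the Dockerfile and content.txt contents are assumptions inferred from the STEP output, not the verbatim testdata:

# Recreate a build context matching the STEP lines above (contents are an assumption):
mkdir -p /tmp/demo-build && cd /tmp/demo-build
printf 'hello from the image build sketch\n' > content.txt
cat > Dockerfile <<'EOF'
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
EOF
# Build inside the cluster's container storage (podman via CRI-O) and confirm the tag exists:
out/minikube-linux-amd64 -p functional-484344 image build -t localhost/my-image:functional-484344 /tmp/demo-build --alsologtostderr
out/minikube-linux-amd64 -p functional-484344 image ls | grep my-image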

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.43s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-484344
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.43s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.38s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-484344 image load --daemon kicbase/echo-server:functional-484344 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-484344 image load --daemon kicbase/echo-server:functional-484344 --alsologtostderr: (1.129959383s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-484344 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.38s)
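
The Setup and ImageLoadDaemon steps above amount to tagging an image in the local docker daemon and copying it into the cluster runtime. A minimal sketch using the same image and profile names as the log:

docker pull kicbase/echo-server:1.0
docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-484344
# Copy the tagged image from the host docker daemon into the cluster's CRI-O storage, then verify:
out/minikube-linux-amd64 -p functional-484344 image load --daemon kicbase/echo-server:functional-484344 --alsologtostderr
out/minikube-linux-amd64 -p functional-484344 image ls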

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (8.14s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-484344 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-484344 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-5758569b79-92v7r" [9ee2612f-ea4c-4ab1-85ed-b31ec224152d] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-5758569b79-92v7r" [9ee2612f-ea4c-4ab1-85ed-b31ec224152d] Running
functional_test.go:1460: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 8.002629929s
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (8.14s)
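
The DeployApp steps above reduce to a create/expose/wait sequence. A minimal sketch with the names from the test; the `kubectl wait` step is an assumption standing in for the test's own pod polling:

kubectl --context functional-484344 create deployment hello-node --image kicbase/echo-server
kubectl --context functional-484344 expose deployment hello-node --type=NodePort --port=8080
# Block until the pod backing the deployment is Ready (the test polls pods labelled app=hello-node):
kubectl --context functional-484344 wait --for=condition=Ready pod -l app=hello-node --timeout=120s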

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.44s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-484344 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-484344 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-484344 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-484344 tunnel --alsologtostderr] ...
helpers_test.go:526: unable to kill pid 68872: os: process already finished
helpers_test.go:520: unable to terminate pid 68541: os: process already finished
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.44s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (0.88s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-484344 image load --daemon kicbase/echo-server:functional-484344 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-484344 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (0.88s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-484344 tunnel --alsologtostderr]
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup (11.22s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-484344 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:353: "nginx-svc" [8807d398-99fc-4859-b8a5-cf8c83ae1adf] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx-svc" [8807d398-99fc-4859-b8a5-cf8c83ae1adf] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 11.004074085s
I1217 00:16:00.359336   16354 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup (11.22s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.22s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-484344
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-484344 image load --daemon kicbase/echo-server:functional-484344 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-484344 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.22s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.33s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-484344 image save kicbase/echo-server:functional-484344 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.33s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.51s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-484344 image rm kicbase/echo-server:functional-484344 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-484344 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.51s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.61s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-484344 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-484344 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.61s)
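
ImageSaveToFile and ImageLoadFromFile together form a tarball round trip. A minimal sketch using the workspace path and profile from the log:

# Export the image from the cluster runtime to a tarball on the host ...
out/minikube-linux-amd64 -p functional-484344 image save kicbase/echo-server:functional-484344 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
# ... and load it back (for example after an `image rm`), then confirm it is present again:
out/minikube-linux-amd64 -p functional-484344 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
out/minikube-linux-amd64 -p functional-484344 image ls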

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.94s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-484344
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-484344 image save --daemon kicbase/echo-server:functional-484344 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-484344
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.94s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (0.5s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-484344 service list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (0.50s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (0.51s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-484344 service list -o json
functional_test.go:1504: Took "510.851202ms" to run "out/minikube-linux-amd64 -p functional-484344 service list -o json"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (0.51s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.34s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-484344 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.49.2:32092
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.34s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.34s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-484344 service hello-node --url --format={{.IP}}
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.34s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.35s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-484344 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:32092
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.35s)
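
The HTTPS, Format, and URL tests above all resolve the NodePort endpoint of hello-node. A minimal sketch of fetching that endpoint and probing it once, assuming curl is available on the host:

# Resolve the NodePort URL for the hello-node service and hit it once:
URL=$(out/minikube-linux-amd64 -p functional-484344 service hello-node --url)
curl -s "$URL"
# The same endpoint expressed as an https URL, as exercised by the HTTPS test:
out/minikube-linux-amd64 -p functional-484344 service --namespace=default --https --url hello-node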

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-484344 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.97.235.102 is working!
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (0.00s)
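
The tunnel tests above (StartTunnel, WaitService, IngressIP, AccessDirect) amount to: run `minikube tunnel`, wait for the LoadBalancer service to receive an ingress IP, then reach it directly. A minimal sketch using the nginx-svc service from testdata/testsvc.yaml; the polling loop is an assumption, and `tunnel` may prompt for sudo to manage routes:

# Run the tunnel in the background so LoadBalancer services get a reachable IP:
out/minikube-linux-amd64 -p functional-484344 tunnel --alsologtostderr &
TUNNEL_PID=$!
kubectl --context functional-484344 apply -f testdata/testsvc.yaml
# Poll until the ingress IP is assigned, then fetch the nginx page once:
until IP=$(kubectl --context functional-484344 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}') && [ -n "$IP" ]; do sleep 2; done
curl -s "http://$IP" >/dev/null && echo "tunnel at http://$IP is working"
kill "$TUNNEL_PID"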

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-484344 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.56s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.56s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (12.28s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-484344 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo334687834/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765930561745073815" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo334687834/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765930561745073815" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo334687834/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765930561745073815" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo334687834/001/test-1765930561745073815
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-484344 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-484344 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (357.577543ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1217 00:16:02.103622   16354 retry.go:31] will retry after 607.117079ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-484344 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-484344 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 17 00:16 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 17 00:16 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 17 00:16 test-1765930561745073815
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-484344 ssh cat /mount-9p/test-1765930561745073815
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-484344 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:353: "busybox-mount" [f0268033-864d-4651-8283-8c2f33993bb5] Pending
I1217 00:16:04.074741   16354 detect.go:223] nested VM detected
helpers_test.go:353: "busybox-mount" [f0268033-864d-4651-8283-8c2f33993bb5] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:353: "busybox-mount" [f0268033-864d-4651-8283-8c2f33993bb5] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "busybox-mount" [f0268033-864d-4651-8283-8c2f33993bb5] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 9.003093062s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-484344 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-484344 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-484344 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-484344 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-484344 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo334687834/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (12.28s)
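
The any-port mount test above drives a 9p mount end to end. A minimal sketch of the same flow with a scratch host directory (the /tmp/demo-mount path is an assumption):

# Mount a host directory into the node over 9p, in the background:
out/minikube-linux-amd64 mount -p functional-484344 /tmp/demo-mount:/mount-9p --alsologtostderr -v=1 &
MOUNT_PID=$!
# Verify from inside the node that /mount-9p really is a 9p mount and list its contents:
out/minikube-linux-amd64 -p functional-484344 ssh "findmnt -T /mount-9p | grep 9p"
out/minikube-linux-amd64 -p functional-484344 ssh -- ls -la /mount-9p
# Tear down: unmount inside the node and stop the background mount process.
out/minikube-linux-amd64 -p functional-484344 ssh "sudo umount -f /mount-9p"
kill "$MOUNT_PID"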

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.5s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "421.550277ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "79.037622ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.50s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.53s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "448.325723ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "77.660384ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.53s)
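
The ProfileCmd timings above contrast the full listing (which probes each cluster's live status) with --light (which skips the probe and returns much faster). A sketch of both forms; jq and the .valid[].Name path are assumptions about the host tooling and the JSON layout:

out/minikube-linux-amd64 profile list -o json | jq -r '.valid[].Name'
out/minikube-linux-amd64 profile list -o json --light | jq -r '.valid[].Name'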

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.18s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-484344 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.18s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.19s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-484344 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.19s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.16s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-484344 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.16s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (1.75s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-484344 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3716293117/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-484344 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-484344 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (297.467468ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1217 00:16:14.321524   16354 retry.go:31] will retry after 303.954284ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-484344 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-484344 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-484344 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3716293117/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-484344 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-484344 ssh "sudo umount -f /mount-9p": exit status 1 (299.764338ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-484344 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-484344 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3716293117/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (1.75s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (1.87s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-484344 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3413078303/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-484344 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3413078303/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-484344 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3413078303/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-484344 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-484344 ssh "findmnt -T" /mount1: exit status 1 (355.272692ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1217 00:16:16.130554   16354 retry.go:31] will retry after 538.915159ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-484344 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-484344 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-484344 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-484344 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-484344 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3413078303/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-484344 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3413078303/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-484344 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3413078303/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (1.87s)
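
VerifyCleanup above checks that a single kill switch tears down several concurrent mounts. A minimal sketch, with /tmp/demo as a stand-in host directory:

# Start several mounts of the same host directory in the background ...
out/minikube-linux-amd64 mount -p functional-484344 /tmp/demo:/mount1 --alsologtostderr -v=1 &
out/minikube-linux-amd64 mount -p functional-484344 /tmp/demo:/mount2 --alsologtostderr -v=1 &
out/minikube-linux-amd64 mount -p functional-484344 /tmp/demo:/mount3 --alsologtostderr -v=1 &
# ... then remove them all at once; --kill=true terminates every mount process for the profile.
out/minikube-linux-amd64 mount -p functional-484344 --kill=true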

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.03s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-484344
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.03s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-484344
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-484344
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StartCluster (115.82s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-557476 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1217 00:16:46.780501   16354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/addons-401977/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:17:14.484134   16354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/addons-401977/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:17:43.475089   16354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/functional-396394/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:17:43.481472   16354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/functional-396394/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:17:43.492820   16354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/functional-396394/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:17:43.514115   16354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/functional-396394/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:17:43.555446   16354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/functional-396394/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:17:43.636807   16354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/functional-396394/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:17:43.798181   16354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/functional-396394/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:17:44.119872   16354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/functional-396394/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:17:44.761528   16354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/functional-396394/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:17:46.043135   16354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/functional-396394/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:17:48.605219   16354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/functional-396394/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:17:53.726481   16354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/functional-396394/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:18:03.968165   16354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/functional-396394/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-557476 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (1m55.114448712s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-557476 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (115.82s)

TestMultiControlPlane/serial/DeployApp (4.03s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-557476 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-557476 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-557476 kubectl -- rollout status deployment/busybox: (1.902907828s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-557476 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-557476 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-557476 kubectl -- exec busybox-7b57f96db7-dmhqv -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-557476 kubectl -- exec busybox-7b57f96db7-nw7mt -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-557476 kubectl -- exec busybox-7b57f96db7-nzzt8 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-557476 kubectl -- exec busybox-7b57f96db7-dmhqv -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-557476 kubectl -- exec busybox-7b57f96db7-nw7mt -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-557476 kubectl -- exec busybox-7b57f96db7-nzzt8 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-557476 kubectl -- exec busybox-7b57f96db7-dmhqv -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-557476 kubectl -- exec busybox-7b57f96db7-nw7mt -- nslookup kubernetes.default.svc.cluster.local
E1217 00:18:24.450312   16354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/functional-396394/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-557476 kubectl -- exec busybox-7b57f96db7-nzzt8 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (4.03s)

TestMultiControlPlane/serial/PingHostFromPods (1.11s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-557476 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-557476 kubectl -- exec busybox-7b57f96db7-dmhqv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-557476 kubectl -- exec busybox-7b57f96db7-dmhqv -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-557476 kubectl -- exec busybox-7b57f96db7-nw7mt -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-557476 kubectl -- exec busybox-7b57f96db7-nw7mt -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-557476 kubectl -- exec busybox-7b57f96db7-nzzt8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-557476 kubectl -- exec busybox-7b57f96db7-nzzt8 -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.11s)

TestMultiControlPlane/serial/AddWorkerNode (23.34s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-557476 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-557476 node add --alsologtostderr -v 5: (22.513726667s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-557476 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (23.34s)

TestMultiControlPlane/serial/NodeLabels (0.06s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-557476 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.85s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.85s)

TestMultiControlPlane/serial/CopyFile (16.97s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-557476 status --output json --alsologtostderr -v 5
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-557476 cp testdata/cp-test.txt ha-557476:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-557476 ssh -n ha-557476 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-557476 cp ha-557476:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile668169240/001/cp-test_ha-557476.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-557476 ssh -n ha-557476 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-557476 cp ha-557476:/home/docker/cp-test.txt ha-557476-m02:/home/docker/cp-test_ha-557476_ha-557476-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-557476 ssh -n ha-557476 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-557476 ssh -n ha-557476-m02 "sudo cat /home/docker/cp-test_ha-557476_ha-557476-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-557476 cp ha-557476:/home/docker/cp-test.txt ha-557476-m03:/home/docker/cp-test_ha-557476_ha-557476-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-557476 ssh -n ha-557476 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-557476 ssh -n ha-557476-m03 "sudo cat /home/docker/cp-test_ha-557476_ha-557476-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-557476 cp ha-557476:/home/docker/cp-test.txt ha-557476-m04:/home/docker/cp-test_ha-557476_ha-557476-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-557476 ssh -n ha-557476 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-557476 ssh -n ha-557476-m04 "sudo cat /home/docker/cp-test_ha-557476_ha-557476-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-557476 cp testdata/cp-test.txt ha-557476-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-557476 ssh -n ha-557476-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-557476 cp ha-557476-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile668169240/001/cp-test_ha-557476-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-557476 ssh -n ha-557476-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-557476 cp ha-557476-m02:/home/docker/cp-test.txt ha-557476:/home/docker/cp-test_ha-557476-m02_ha-557476.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-557476 ssh -n ha-557476-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-557476 ssh -n ha-557476 "sudo cat /home/docker/cp-test_ha-557476-m02_ha-557476.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-557476 cp ha-557476-m02:/home/docker/cp-test.txt ha-557476-m03:/home/docker/cp-test_ha-557476-m02_ha-557476-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-557476 ssh -n ha-557476-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-557476 ssh -n ha-557476-m03 "sudo cat /home/docker/cp-test_ha-557476-m02_ha-557476-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-557476 cp ha-557476-m02:/home/docker/cp-test.txt ha-557476-m04:/home/docker/cp-test_ha-557476-m02_ha-557476-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-557476 ssh -n ha-557476-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-557476 ssh -n ha-557476-m04 "sudo cat /home/docker/cp-test_ha-557476-m02_ha-557476-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-557476 cp testdata/cp-test.txt ha-557476-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-557476 ssh -n ha-557476-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-557476 cp ha-557476-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile668169240/001/cp-test_ha-557476-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-557476 ssh -n ha-557476-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-557476 cp ha-557476-m03:/home/docker/cp-test.txt ha-557476:/home/docker/cp-test_ha-557476-m03_ha-557476.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-557476 ssh -n ha-557476-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-557476 ssh -n ha-557476 "sudo cat /home/docker/cp-test_ha-557476-m03_ha-557476.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-557476 cp ha-557476-m03:/home/docker/cp-test.txt ha-557476-m02:/home/docker/cp-test_ha-557476-m03_ha-557476-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-557476 ssh -n ha-557476-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-557476 ssh -n ha-557476-m02 "sudo cat /home/docker/cp-test_ha-557476-m03_ha-557476-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-557476 cp ha-557476-m03:/home/docker/cp-test.txt ha-557476-m04:/home/docker/cp-test_ha-557476-m03_ha-557476-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-557476 ssh -n ha-557476-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-557476 ssh -n ha-557476-m04 "sudo cat /home/docker/cp-test_ha-557476-m03_ha-557476-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-557476 cp testdata/cp-test.txt ha-557476-m04:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-557476 ssh -n ha-557476-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-557476 cp ha-557476-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile668169240/001/cp-test_ha-557476-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-557476 ssh -n ha-557476-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-557476 cp ha-557476-m04:/home/docker/cp-test.txt ha-557476:/home/docker/cp-test_ha-557476-m04_ha-557476.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-557476 ssh -n ha-557476-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-557476 ssh -n ha-557476 "sudo cat /home/docker/cp-test_ha-557476-m04_ha-557476.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-557476 cp ha-557476-m04:/home/docker/cp-test.txt ha-557476-m02:/home/docker/cp-test_ha-557476-m04_ha-557476-m02.txt
E1217 00:19:05.412134   16354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/functional-396394/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-557476 ssh -n ha-557476-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-557476 ssh -n ha-557476-m02 "sudo cat /home/docker/cp-test_ha-557476-m04_ha-557476-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-557476 cp ha-557476-m04:/home/docker/cp-test.txt ha-557476-m03:/home/docker/cp-test_ha-557476-m04_ha-557476-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-557476 ssh -n ha-557476-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-557476 ssh -n ha-557476-m03 "sudo cat /home/docker/cp-test_ha-557476-m04_ha-557476-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (16.97s)

TestMultiControlPlane/serial/StopSecondaryNode (14.22s)
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-557476 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-557476 node stop m02 --alsologtostderr -v 5: (13.560088223s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-557476 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-557476 status --alsologtostderr -v 5: exit status 7 (662.387815ms)

-- stdout --
	ha-557476
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-557476-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-557476-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-557476-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I1217 00:19:20.646342   96277 out.go:360] Setting OutFile to fd 1 ...
	I1217 00:19:20.646579   96277 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:19:20.646588   96277 out.go:374] Setting ErrFile to fd 2...
	I1217 00:19:20.646592   96277 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:19:20.646761   96277 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-12816/.minikube/bin
	I1217 00:19:20.646968   96277 out.go:368] Setting JSON to false
	I1217 00:19:20.647010   96277 mustload.go:66] Loading cluster: ha-557476
	I1217 00:19:20.647139   96277 notify.go:221] Checking for updates...
	I1217 00:19:20.647425   96277 config.go:182] Loaded profile config "ha-557476": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:19:20.647440   96277 status.go:174] checking status of ha-557476 ...
	I1217 00:19:20.647889   96277 cli_runner.go:164] Run: docker container inspect ha-557476 --format={{.State.Status}}
	I1217 00:19:20.666573   96277 status.go:371] ha-557476 host status = "Running" (err=<nil>)
	I1217 00:19:20.666593   96277 host.go:66] Checking if "ha-557476" exists ...
	I1217 00:19:20.666851   96277 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-557476
	I1217 00:19:20.685320   96277 host.go:66] Checking if "ha-557476" exists ...
	I1217 00:19:20.685601   96277 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 00:19:20.685648   96277 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-557476
	I1217 00:19:20.702904   96277 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/ha-557476/id_rsa Username:docker}
	I1217 00:19:20.794185   96277 ssh_runner.go:195] Run: systemctl --version
	I1217 00:19:20.800219   96277 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 00:19:20.812817   96277 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:19:20.871214   96277 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:72 OomKillDisable:false NGoroutines:74 SystemTime:2025-12-17 00:19:20.861686454 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 00:19:20.871694   96277 kubeconfig.go:125] found "ha-557476" server: "https://192.168.49.254:8443"
	I1217 00:19:20.871727   96277 api_server.go:166] Checking apiserver status ...
	I1217 00:19:20.871768   96277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:19:20.883398   96277 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1265/cgroup
	W1217 00:19:20.891604   96277 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1265/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1217 00:19:20.891658   96277 ssh_runner.go:195] Run: ls
	I1217 00:19:20.895753   96277 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1217 00:19:20.899885   96277 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1217 00:19:20.899904   96277 status.go:463] ha-557476 apiserver status = Running (err=<nil>)
	I1217 00:19:20.899911   96277 status.go:176] ha-557476 status: &{Name:ha-557476 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1217 00:19:20.899925   96277 status.go:174] checking status of ha-557476-m02 ...
	I1217 00:19:20.900181   96277 cli_runner.go:164] Run: docker container inspect ha-557476-m02 --format={{.State.Status}}
	I1217 00:19:20.918076   96277 status.go:371] ha-557476-m02 host status = "Stopped" (err=<nil>)
	I1217 00:19:20.918092   96277 status.go:384] host is not running, skipping remaining checks
	I1217 00:19:20.918098   96277 status.go:176] ha-557476-m02 status: &{Name:ha-557476-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1217 00:19:20.918120   96277 status.go:174] checking status of ha-557476-m03 ...
	I1217 00:19:20.918341   96277 cli_runner.go:164] Run: docker container inspect ha-557476-m03 --format={{.State.Status}}
	I1217 00:19:20.936434   96277 status.go:371] ha-557476-m03 host status = "Running" (err=<nil>)
	I1217 00:19:20.936452   96277 host.go:66] Checking if "ha-557476-m03" exists ...
	I1217 00:19:20.936706   96277 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-557476-m03
	I1217 00:19:20.953151   96277 host.go:66] Checking if "ha-557476-m03" exists ...
	I1217 00:19:20.953391   96277 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 00:19:20.953421   96277 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-557476-m03
	I1217 00:19:20.969206   96277 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/ha-557476-m03/id_rsa Username:docker}
	I1217 00:19:21.057883   96277 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 00:19:21.069834   96277 kubeconfig.go:125] found "ha-557476" server: "https://192.168.49.254:8443"
	I1217 00:19:21.069859   96277 api_server.go:166] Checking apiserver status ...
	I1217 00:19:21.069902   96277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:19:21.080148   96277 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1181/cgroup
	W1217 00:19:21.088393   96277 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1181/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1217 00:19:21.088427   96277 ssh_runner.go:195] Run: ls
	I1217 00:19:21.091743   96277 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1217 00:19:21.095925   96277 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1217 00:19:21.095941   96277 status.go:463] ha-557476-m03 apiserver status = Running (err=<nil>)
	I1217 00:19:21.095948   96277 status.go:176] ha-557476-m03 status: &{Name:ha-557476-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1217 00:19:21.095960   96277 status.go:174] checking status of ha-557476-m04 ...
	I1217 00:19:21.096174   96277 cli_runner.go:164] Run: docker container inspect ha-557476-m04 --format={{.State.Status}}
	I1217 00:19:21.114270   96277 status.go:371] ha-557476-m04 host status = "Running" (err=<nil>)
	I1217 00:19:21.114294   96277 host.go:66] Checking if "ha-557476-m04" exists ...
	I1217 00:19:21.114619   96277 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-557476-m04
	I1217 00:19:21.131575   96277 host.go:66] Checking if "ha-557476-m04" exists ...
	I1217 00:19:21.131872   96277 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 00:19:21.131937   96277 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-557476-m04
	I1217 00:19:21.148868   96277 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32803 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/ha-557476-m04/id_rsa Username:docker}
	I1217 00:19:21.237579   96277 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 00:19:21.249418   96277 status.go:176] ha-557476-m04 status: &{Name:ha-557476-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (14.22s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.69s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.69s)

TestMultiControlPlane/serial/RestartSecondaryNode (8.61s)
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-557476 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-557476 node start m02 --alsologtostderr -v 5: (7.722675362s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-557476 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (8.61s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.86s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.86s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (103.23s)
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-557476 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-557476 stop --alsologtostderr -v 5
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-557476 stop --alsologtostderr -v 5: (44.210842496s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-557476 start --wait true --alsologtostderr -v 5
E1217 00:20:27.334375   16354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/functional-396394/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:20:48.107175   16354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/functional-484344/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:20:48.113571   16354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/functional-484344/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:20:48.124900   16354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/functional-484344/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:20:48.146331   16354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/functional-484344/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:20:48.187809   16354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/functional-484344/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:20:48.269357   16354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/functional-484344/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:20:48.431278   16354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/functional-484344/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:20:48.752556   16354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/functional-484344/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:20:49.394884   16354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/functional-484344/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:20:50.676543   16354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/functional-484344/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:20:53.238428   16354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/functional-484344/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:20:58.359942   16354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/functional-484344/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:21:08.601475   16354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/functional-484344/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-557476 start --wait true --alsologtostderr -v 5: (58.896097059s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-557476 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (103.23s)

TestMultiControlPlane/serial/DeleteSecondaryNode (10.46s)
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-557476 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-557476 node delete m03 --alsologtostderr -v 5: (9.67698392s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-557476 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.46s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.66s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.66s)

TestMultiControlPlane/serial/StopCluster (48.97s)
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-557476 stop --alsologtostderr -v 5
E1217 00:21:29.083214   16354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/functional-484344/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:21:46.780393   16354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/addons-401977/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:22:10.046338   16354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/functional-484344/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-557476 stop --alsologtostderr -v 5: (48.856525162s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-557476 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-557476 status --alsologtostderr -v 5: exit status 7 (116.627755ms)

-- stdout --
	ha-557476
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-557476-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-557476-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1217 00:22:14.677754  110555 out.go:360] Setting OutFile to fd 1 ...
	I1217 00:22:14.678048  110555 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:22:14.678058  110555 out.go:374] Setting ErrFile to fd 2...
	I1217 00:22:14.678061  110555 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:22:14.678297  110555 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-12816/.minikube/bin
	I1217 00:22:14.678507  110555 out.go:368] Setting JSON to false
	I1217 00:22:14.678533  110555 mustload.go:66] Loading cluster: ha-557476
	I1217 00:22:14.678634  110555 notify.go:221] Checking for updates...
	I1217 00:22:14.678980  110555 config.go:182] Loaded profile config "ha-557476": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:22:14.679006  110555 status.go:174] checking status of ha-557476 ...
	I1217 00:22:14.679461  110555 cli_runner.go:164] Run: docker container inspect ha-557476 --format={{.State.Status}}
	I1217 00:22:14.701409  110555 status.go:371] ha-557476 host status = "Stopped" (err=<nil>)
	I1217 00:22:14.701426  110555 status.go:384] host is not running, skipping remaining checks
	I1217 00:22:14.701431  110555 status.go:176] ha-557476 status: &{Name:ha-557476 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1217 00:22:14.701466  110555 status.go:174] checking status of ha-557476-m02 ...
	I1217 00:22:14.701717  110555 cli_runner.go:164] Run: docker container inspect ha-557476-m02 --format={{.State.Status}}
	I1217 00:22:14.720856  110555 status.go:371] ha-557476-m02 host status = "Stopped" (err=<nil>)
	I1217 00:22:14.720903  110555 status.go:384] host is not running, skipping remaining checks
	I1217 00:22:14.720911  110555 status.go:176] ha-557476-m02 status: &{Name:ha-557476-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1217 00:22:14.720940  110555 status.go:174] checking status of ha-557476-m04 ...
	I1217 00:22:14.721226  110555 cli_runner.go:164] Run: docker container inspect ha-557476-m04 --format={{.State.Status}}
	I1217 00:22:14.738345  110555 status.go:371] ha-557476-m04 host status = "Stopped" (err=<nil>)
	I1217 00:22:14.738364  110555 status.go:384] host is not running, skipping remaining checks
	I1217 00:22:14.738370  110555 status.go:176] ha-557476-m04 status: &{Name:ha-557476-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (48.97s)

TestMultiControlPlane/serial/RestartCluster (58.28s)
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-557476 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1217 00:22:43.475723   16354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/functional-396394/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:23:11.177418   16354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/functional-396394/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-557476 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (57.503983228s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-557476 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (58.28s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.65s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.65s)

TestMultiControlPlane/serial/AddSecondaryNode (38.81s)
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-557476 node add --control-plane --alsologtostderr -v 5
E1217 00:23:31.970501   16354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/functional-484344/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-557476 node add --control-plane --alsologtostderr -v 5: (37.973148262s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-557476 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (38.81s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.85s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.85s)

TestJSONOutput/start/Command (38.35s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-664600 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-664600 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (38.346340073s)
--- PASS: TestJSONOutput/start/Command (38.35s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (6.07s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-664600 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-664600 --output=json --user=testUser: (6.073243257s)
--- PASS: TestJSONOutput/stop/Command (6.07s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.23s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-701844 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-701844 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (73.505694ms)

-- stdout --
	{"specversion":"1.0","id":"a15ded8f-aa5f-4a5f-904a-de81aca31846","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-701844] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"1daf4871-25ac-4c2d-bb0c-91cad6f8fdd3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22168"}}
	{"specversion":"1.0","id":"111e88ae-7abe-4d55-81c8-ca0684ede871","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"8420de31-a748-4739-a0c9-ef71db9dac1c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22168-12816/kubeconfig"}}
	{"specversion":"1.0","id":"7fa82545-6aec-49c8-8340-8b0dfe86c1e4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-12816/.minikube"}}
	{"specversion":"1.0","id":"6e80cddb-f990-4750-926e-fadbf1140262","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"e1de14e9-f271-48d8-9c2e-1618c9e35628","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"2f9a44ac-4f1e-4d69-9cd4-8183dff08b90","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-701844" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-701844
--- PASS: TestErrorJSONOutput (0.23s)

TestKicCustomNetwork/create_custom_network (27.4s)
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-592377 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-592377 --network=: (25.267382871s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-592377" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-592377
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-592377: (2.110031739s)
--- PASS: TestKicCustomNetwork/create_custom_network (27.40s)

TestKicCustomNetwork/use_default_bridge_network (20.72s)
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-479990 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-479990 --network=bridge: (18.691760401s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-479990" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-479990
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-479990: (2.009687746s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (20.72s)

TestKicExistingNetwork (22.45s)
=== RUN   TestKicExistingNetwork
I1217 00:25:42.554284   16354 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1217 00:25:42.570488   16354 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1217 00:25:42.570542   16354 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1217 00:25:42.570561   16354 cli_runner.go:164] Run: docker network inspect existing-network
W1217 00:25:42.588757   16354 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1217 00:25:42.588781   16354 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I1217 00:25:42.588800   16354 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I1217 00:25:42.588912   16354 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1217 00:25:42.605056   16354 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-6ffd1d738f01 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:6e:3d:52:75:47:82} reservation:<nil>}
I1217 00:25:42.605507   16354 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d39400}
I1217 00:25:42.605540   16354 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1217 00:25:42.605586   16354 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1217 00:25:42.650919   16354 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-942243 --network=existing-network
E1217 00:25:48.107957   16354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/functional-484344/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-942243 --network=existing-network: (20.345504392s)
helpers_test.go:176: Cleaning up "existing-network-942243" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-942243
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-942243: (1.972621947s)
I1217 00:26:04.985734   16354 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (22.45s)
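
The same flag also accepts a network created ahead of time, which is what this test sets up with docker network create. A rough hand-run equivalent (network name and subnet are illustrative; the test's actual create command additionally passes bridge driver options and minikube bookkeeping labels):

    # Pre-create a bridge network with an explicit subnet and gateway
    docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 existing-network
    # Point minikube at it instead of letting it allocate one
    minikube start -p existing-net-demo --driver=docker --container-runtime=crio --network=existing-network
    # Clean up
    minikube delete -p existing-net-demo
    docker network rm existing-network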

                                                
                                    
x
+
TestKicCustomSubnet (25.93s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-370647 --subnet=192.168.60.0/24
E1217 00:26:15.813133   16354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/functional-484344/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-370647 --subnet=192.168.60.0/24: (23.780934149s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-370647 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:176: Cleaning up "custom-subnet-370647" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-370647
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-370647: (2.130817745s)
--- PASS: TestKicCustomSubnet (25.93s)
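
A related knob is --subnet, which carves the per-profile network out of a specific range; the inspect command below is the same check the test performs (the network takes the profile's name, as it does in the log above).

    minikube start -p subnet-demo --driver=docker --container-runtime=crio --subnet=192.168.60.0/24
    # Should print 192.168.60.0/24
    docker network inspect subnet-demo --format "{{(index .IPAM.Config 0).Subnet}}"
    minikube delete -p subnet-demo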

                                                
                                    
x
+
TestKicStaticIP (22.59s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-612488 --static-ip=192.168.200.200
E1217 00:26:46.782021   16354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/addons-401977/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-612488 --static-ip=192.168.200.200: (20.34183921s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-612488 ip
helpers_test.go:176: Cleaning up "static-ip-612488" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-612488
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-612488: (2.095584056s)
--- PASS: TestKicStaticIP (22.59s)
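
Static addressing works the same way from the CLI; the address below is the one the test requests, and the profile name is illustrative.

    minikube start -p staticip-demo --driver=docker --container-runtime=crio --static-ip=192.168.200.200
    # Should print the requested address
    minikube -p staticip-demo ip
    minikube delete -p staticip-demo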

                                                
                                    
x
+
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
x
+
TestMinikubeProfile (44.85s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-882553 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-882553 --driver=docker  --container-runtime=crio: (21.252383765s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-885502 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-885502 --driver=docker  --container-runtime=crio: (17.785500942s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-882553
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-885502
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:176: Cleaning up "second-885502" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p second-885502
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p second-885502: (2.325922942s)
helpers_test.go:176: Cleaning up "first-882553" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p first-882553
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p first-882553: (2.297817176s)
--- PASS: TestMinikubeProfile (44.85s)
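
A minimal hand-run version of the profile juggling above (profile names illustrative):

    # Two independent clusters under separate profiles
    minikube start -p first --driver=docker --container-runtime=crio
    minikube start -p second --driver=docker --container-runtime=crio
    # Switch the active profile and inspect all profiles as JSON
    minikube profile first
    minikube profile list -o json
    # Tear both down
    minikube delete -p second
    minikube delete -p first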

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (7.74s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-359804 --memory=3072 --mount-string /tmp/TestMountStartserial1107015647/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
E1217 00:27:43.479195   16354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/functional-396394/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-359804 --memory=3072 --mount-string /tmp/TestMountStartserial1107015647/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (6.735559156s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.74s)
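
The start-time mount flags wire a host directory into the guest over 9p; the sketch below simply mirrors the logged flags (paths, port, and msize are illustrative).

    minikube start -p mount-demo --driver=docker --container-runtime=crio --no-kubernetes \
      --memory=3072 --mount-string /tmp/hostdir:/minikube-host \
      --mount-uid 0 --mount-gid 0 --mount-msize 6543 --mount-port 46464
    # The host directory should be visible from inside the node
    minikube -p mount-demo ssh -- ls /minikube-host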

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-359804 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (4.56s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-371379 --memory=3072 --mount-string /tmp/TestMountStartserial1107015647/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-371379 --memory=3072 --mount-string /tmp/TestMountStartserial1107015647/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (3.563858687s)
--- PASS: TestMountStart/serial/StartWithMountSecond (4.56s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-371379 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (1.66s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-359804 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-359804 --alsologtostderr -v=5: (1.662361442s)
--- PASS: TestMountStart/serial/DeleteFirst (1.66s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-371379 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.26s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.25s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-371379
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-371379: (1.252783727s)
--- PASS: TestMountStart/serial/Stop (1.25s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (7.11s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-371379
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-371379: (6.104754228s)
--- PASS: TestMountStart/serial/RestartStopped (7.11s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-371379 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (58.38s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-535560 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1217 00:28:09.846113   16354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/addons-401977/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-535560 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (57.927769537s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-535560 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (58.38s)
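
A trimmed-down version of the two-node bring-up (profile name illustrative):

    minikube start -p multi-demo --driver=docker --container-runtime=crio --memory=3072 --nodes=2 --wait=true
    # Should report one control-plane node and one worker, all Running
    minikube -p multi-demo status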

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (3.45s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-535560 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-535560 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-535560 -- rollout status deployment/busybox: (1.878332524s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-535560 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-535560 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-535560 -- exec busybox-7b57f96db7-5c4c4 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-535560 -- exec busybox-7b57f96db7-bcr66 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-535560 -- exec busybox-7b57f96db7-5c4c4 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-535560 -- exec busybox-7b57f96db7-bcr66 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-535560 -- exec busybox-7b57f96db7-5c4c4 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-535560 -- exec busybox-7b57f96db7-bcr66 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (3.45s)
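
The manifest and pod names above come from the test's own testdata; the same flow, sketched against that busybox deployment (the pod name placeholder must be filled in from get pods):

    minikube kubectl -p multi-demo -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
    minikube kubectl -p multi-demo -- rollout status deployment/busybox
    # One pod lands on each node; each should resolve cluster DNS
    minikube kubectl -p multi-demo -- get pods -o jsonpath='{.items[*].metadata.name}'
    minikube kubectl -p multi-demo -- exec <busybox-pod> -- nslookup kubernetes.default.svc.cluster.local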

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (0.74s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-535560 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-535560 -- exec busybox-7b57f96db7-5c4c4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-535560 -- exec busybox-7b57f96db7-5c4c4 -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-535560 -- exec busybox-7b57f96db7-bcr66 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-535560 -- exec busybox-7b57f96db7-bcr66 -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.74s)

                                                
                                    
x
+
TestMultiNode/serial/AddNode (55.31s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-535560 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-535560 -v=5 --alsologtostderr: (54.689555874s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-535560 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (55.31s)
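
Adding a worker after the fact is a single command (profile name illustrative):

    minikube node add -p multi-demo
    # The new node should appear in the status output
    minikube -p multi-demo status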

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-535560 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.63s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.63s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (9.66s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-535560 status --output json --alsologtostderr
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-535560 cp testdata/cp-test.txt multinode-535560:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-535560 ssh -n multinode-535560 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-535560 cp multinode-535560:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3116930920/001/cp-test_multinode-535560.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-535560 ssh -n multinode-535560 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-535560 cp multinode-535560:/home/docker/cp-test.txt multinode-535560-m02:/home/docker/cp-test_multinode-535560_multinode-535560-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-535560 ssh -n multinode-535560 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-535560 ssh -n multinode-535560-m02 "sudo cat /home/docker/cp-test_multinode-535560_multinode-535560-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-535560 cp multinode-535560:/home/docker/cp-test.txt multinode-535560-m03:/home/docker/cp-test_multinode-535560_multinode-535560-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-535560 ssh -n multinode-535560 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-535560 ssh -n multinode-535560-m03 "sudo cat /home/docker/cp-test_multinode-535560_multinode-535560-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-535560 cp testdata/cp-test.txt multinode-535560-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-535560 ssh -n multinode-535560-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-535560 cp multinode-535560-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3116930920/001/cp-test_multinode-535560-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-535560 ssh -n multinode-535560-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-535560 cp multinode-535560-m02:/home/docker/cp-test.txt multinode-535560:/home/docker/cp-test_multinode-535560-m02_multinode-535560.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-535560 ssh -n multinode-535560-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-535560 ssh -n multinode-535560 "sudo cat /home/docker/cp-test_multinode-535560-m02_multinode-535560.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-535560 cp multinode-535560-m02:/home/docker/cp-test.txt multinode-535560-m03:/home/docker/cp-test_multinode-535560-m02_multinode-535560-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-535560 ssh -n multinode-535560-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-535560 ssh -n multinode-535560-m03 "sudo cat /home/docker/cp-test_multinode-535560-m02_multinode-535560-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-535560 cp testdata/cp-test.txt multinode-535560-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-535560 ssh -n multinode-535560-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-535560 cp multinode-535560-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3116930920/001/cp-test_multinode-535560-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-535560 ssh -n multinode-535560-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-535560 cp multinode-535560-m03:/home/docker/cp-test.txt multinode-535560:/home/docker/cp-test_multinode-535560-m03_multinode-535560.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-535560 ssh -n multinode-535560-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-535560 ssh -n multinode-535560 "sudo cat /home/docker/cp-test_multinode-535560-m03_multinode-535560.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-535560 cp multinode-535560-m03:/home/docker/cp-test.txt multinode-535560-m02:/home/docker/cp-test_multinode-535560-m03_multinode-535560-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-535560 ssh -n multinode-535560-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-535560 ssh -n multinode-535560-m02 "sudo cat /home/docker/cp-test_multinode-535560-m03_multinode-535560-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.66s)
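
The copy matrix above boils down to three directions of minikube cp, each verified over ssh; profile, node, and file names below are illustrative:

    # host -> node
    minikube -p multi-demo cp testdata/cp-test.txt multi-demo:/home/docker/cp-test.txt
    # node -> host
    minikube -p multi-demo cp multi-demo:/home/docker/cp-test.txt /tmp/cp-test-from-node.txt
    # node -> node
    minikube -p multi-demo cp multi-demo:/home/docker/cp-test.txt multi-demo-m02:/home/docker/cp-test.txt
    minikube -p multi-demo ssh -n multi-demo-m02 "sudo cat /home/docker/cp-test.txt"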

                                                
                                    
x
+
TestMultiNode/serial/StopNode (2.21s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-535560 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-535560 node stop m03: (1.261360308s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-535560 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-535560 status: exit status 7 (470.799939ms)

                                                
                                                
-- stdout --
	multinode-535560
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-535560-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-535560-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-535560 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-535560 status --alsologtostderr: exit status 7 (474.001035ms)

                                                
                                                
-- stdout --
	multinode-535560
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-535560-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-535560-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 00:30:13.670653  170175 out.go:360] Setting OutFile to fd 1 ...
	I1217 00:30:13.670737  170175 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:30:13.670744  170175 out.go:374] Setting ErrFile to fd 2...
	I1217 00:30:13.670749  170175 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:30:13.670935  170175 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-12816/.minikube/bin
	I1217 00:30:13.671094  170175 out.go:368] Setting JSON to false
	I1217 00:30:13.671124  170175 mustload.go:66] Loading cluster: multinode-535560
	I1217 00:30:13.671261  170175 notify.go:221] Checking for updates...
	I1217 00:30:13.671591  170175 config.go:182] Loaded profile config "multinode-535560": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:30:13.671610  170175 status.go:174] checking status of multinode-535560 ...
	I1217 00:30:13.672147  170175 cli_runner.go:164] Run: docker container inspect multinode-535560 --format={{.State.Status}}
	I1217 00:30:13.690465  170175 status.go:371] multinode-535560 host status = "Running" (err=<nil>)
	I1217 00:30:13.690483  170175 host.go:66] Checking if "multinode-535560" exists ...
	I1217 00:30:13.690700  170175 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-535560
	I1217 00:30:13.707322  170175 host.go:66] Checking if "multinode-535560" exists ...
	I1217 00:30:13.707589  170175 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 00:30:13.707630  170175 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-535560
	I1217 00:30:13.725289  170175 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/multinode-535560/id_rsa Username:docker}
	I1217 00:30:13.814458  170175 ssh_runner.go:195] Run: systemctl --version
	I1217 00:30:13.821096  170175 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 00:30:13.833210  170175 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:30:13.890110  170175 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:false NGoroutines:64 SystemTime:2025-12-17 00:30:13.880370373 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 00:30:13.890797  170175 kubeconfig.go:125] found "multinode-535560" server: "https://192.168.67.2:8443"
	I1217 00:30:13.890839  170175 api_server.go:166] Checking apiserver status ...
	I1217 00:30:13.890885  170175 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:30:13.901975  170175 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1236/cgroup
	W1217 00:30:13.910076  170175 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1236/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1217 00:30:13.910135  170175 ssh_runner.go:195] Run: ls
	I1217 00:30:13.913522  170175 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1217 00:30:13.917555  170175 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1217 00:30:13.917572  170175 status.go:463] multinode-535560 apiserver status = Running (err=<nil>)
	I1217 00:30:13.917580  170175 status.go:176] multinode-535560 status: &{Name:multinode-535560 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1217 00:30:13.917595  170175 status.go:174] checking status of multinode-535560-m02 ...
	I1217 00:30:13.917812  170175 cli_runner.go:164] Run: docker container inspect multinode-535560-m02 --format={{.State.Status}}
	I1217 00:30:13.935106  170175 status.go:371] multinode-535560-m02 host status = "Running" (err=<nil>)
	I1217 00:30:13.935128  170175 host.go:66] Checking if "multinode-535560-m02" exists ...
	I1217 00:30:13.935354  170175 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-535560-m02
	I1217 00:30:13.954193  170175 host.go:66] Checking if "multinode-535560-m02" exists ...
	I1217 00:30:13.954420  170175 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 00:30:13.954455  170175 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-535560-m02
	I1217 00:30:13.971036  170175 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/22168-12816/.minikube/machines/multinode-535560-m02/id_rsa Username:docker}
	I1217 00:30:14.058655  170175 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 00:30:14.070151  170175 status.go:176] multinode-535560-m02 status: &{Name:multinode-535560-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1217 00:30:14.070187  170175 status.go:174] checking status of multinode-535560-m03 ...
	I1217 00:30:14.070508  170175 cli_runner.go:164] Run: docker container inspect multinode-535560-m03 --format={{.State.Status}}
	I1217 00:30:14.088320  170175 status.go:371] multinode-535560-m03 host status = "Stopped" (err=<nil>)
	I1217 00:30:14.088340  170175 status.go:384] host is not running, skipping remaining checks
	I1217 00:30:14.088345  170175 status.go:176] multinode-535560-m03 status: &{Name:multinode-535560-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.21s)

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (6.98s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-535560 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-535560 node start m03 -v=5 --alsologtostderr: (6.31440951s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-535560 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (6.98s)
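
Stopping and restarting a single node by hand (profile name illustrative, node name as in the test):

    # Stop just the third node; overall status then exits 7 because one host is down
    minikube -p multi-demo node stop m03
    minikube -p multi-demo status
    # Bring it back and confirm it rejoins the cluster
    minikube -p multi-demo node start m03
    kubectl get nodes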

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (81.81s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-535560
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-535560
E1217 00:30:48.107299   16354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/functional-484344/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-535560: (31.287638255s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-535560 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-535560 --wait=true -v=5 --alsologtostderr: (50.402842296s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-535560
--- PASS: TestMultiNode/serial/RestartKeepsNodes (81.81s)

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (5.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-535560 node delete m03
E1217 00:31:46.780200   16354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/addons-401977/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-535560 node delete m03: (4.485479069s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-535560 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.06s)
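
Removing a node is symmetrical:

    minikube -p multi-demo node delete m03
    # The remaining nodes should still report Ready
    kubectl get nodes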

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (30.27s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-535560 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-535560 stop: (30.084202699s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-535560 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-535560 status: exit status 7 (95.648895ms)

                                                
                                                
-- stdout --
	multinode-535560
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-535560-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-535560 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-535560 status --alsologtostderr: exit status 7 (94.150044ms)

                                                
                                                
-- stdout --
	multinode-535560
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-535560-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 00:32:18.174927  179932 out.go:360] Setting OutFile to fd 1 ...
	I1217 00:32:18.175199  179932 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:32:18.175210  179932 out.go:374] Setting ErrFile to fd 2...
	I1217 00:32:18.175214  179932 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:32:18.175476  179932 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-12816/.minikube/bin
	I1217 00:32:18.175683  179932 out.go:368] Setting JSON to false
	I1217 00:32:18.175710  179932 mustload.go:66] Loading cluster: multinode-535560
	I1217 00:32:18.175828  179932 notify.go:221] Checking for updates...
	I1217 00:32:18.176228  179932 config.go:182] Loaded profile config "multinode-535560": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:32:18.176250  179932 status.go:174] checking status of multinode-535560 ...
	I1217 00:32:18.176877  179932 cli_runner.go:164] Run: docker container inspect multinode-535560 --format={{.State.Status}}
	I1217 00:32:18.196030  179932 status.go:371] multinode-535560 host status = "Stopped" (err=<nil>)
	I1217 00:32:18.196079  179932 status.go:384] host is not running, skipping remaining checks
	I1217 00:32:18.196090  179932 status.go:176] multinode-535560 status: &{Name:multinode-535560 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1217 00:32:18.196119  179932 status.go:174] checking status of multinode-535560-m02 ...
	I1217 00:32:18.196368  179932 cli_runner.go:164] Run: docker container inspect multinode-535560-m02 --format={{.State.Status}}
	I1217 00:32:18.212454  179932 status.go:371] multinode-535560-m02 host status = "Stopped" (err=<nil>)
	I1217 00:32:18.212478  179932 status.go:384] host is not running, skipping remaining checks
	I1217 00:32:18.212483  179932 status.go:176] multinode-535560-m02 status: &{Name:multinode-535560-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (30.27s)

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (24.19s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-535560 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-535560 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (23.622470539s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-535560 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (24.19s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (25.95s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-535560
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-535560-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-535560-m02 --driver=docker  --container-runtime=crio: exit status 14 (73.329875ms)

                                                
                                                
-- stdout --
	* [multinode-535560-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22168
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22168-12816/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-12816/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-535560-m02' is duplicated with machine name 'multinode-535560-m02' in profile 'multinode-535560'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-535560-m03 --driver=docker  --container-runtime=crio
E1217 00:32:43.475572   16354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/functional-396394/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-535560-m03 --driver=docker  --container-runtime=crio: (23.191636704s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-535560
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-535560: exit status 80 (277.613111ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-535560 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-535560-m03 already exists in multinode-535560-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-535560-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-535560-m03: (2.347072688s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (25.95s)

                                                
                                    
x
+
TestPreload (78.93s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:41: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-931874 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio
preload_test.go:41: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-931874 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio: (44.018613939s)
preload_test.go:49: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-931874 image pull gcr.io/k8s-minikube/busybox
preload_test.go:55: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-931874
preload_test.go:55: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-931874: (6.155708045s)
preload_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-931874 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E1217 00:34:06.540508   16354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/functional-396394/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-931874 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (25.38384277s)
preload_test.go:68: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-931874 image list
helpers_test.go:176: Cleaning up "test-preload-931874" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-931874
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-931874: (2.357680918s)
--- PASS: TestPreload (78.93s)
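
The preload round trip above can be reproduced by hand (profile name illustrative): start without the preloaded-images tarball, pull an extra image, then restart with preload enabled and check the image survived.

    minikube start -p preload-demo --driver=docker --container-runtime=crio --memory=3072 --preload=false
    minikube -p preload-demo image pull gcr.io/k8s-minikube/busybox
    minikube stop -p preload-demo
    minikube start -p preload-demo --driver=docker --container-runtime=crio --preload=true
    # busybox should still be listed after the restart
    minikube -p preload-demo image list
    minikube delete -p preload-demo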

                                                
                                    
x
+
TestScheduledStopUnix (95.23s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-123503 --memory=3072 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-123503 --memory=3072 --driver=docker  --container-runtime=crio: (19.098902826s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-123503 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1217 00:34:50.569304  196807 out.go:360] Setting OutFile to fd 1 ...
	I1217 00:34:50.569532  196807 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:34:50.569540  196807 out.go:374] Setting ErrFile to fd 2...
	I1217 00:34:50.569544  196807 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:34:50.569730  196807 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-12816/.minikube/bin
	I1217 00:34:50.569941  196807 out.go:368] Setting JSON to false
	I1217 00:34:50.570051  196807 mustload.go:66] Loading cluster: scheduled-stop-123503
	I1217 00:34:50.570332  196807 config.go:182] Loaded profile config "scheduled-stop-123503": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:34:50.570392  196807 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/scheduled-stop-123503/config.json ...
	I1217 00:34:50.570567  196807 mustload.go:66] Loading cluster: scheduled-stop-123503
	I1217 00:34:50.570659  196807 config.go:182] Loaded profile config "scheduled-stop-123503": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-123503 -n scheduled-stop-123503
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-123503 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1217 00:34:50.941204  196959 out.go:360] Setting OutFile to fd 1 ...
	I1217 00:34:50.941460  196959 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:34:50.941469  196959 out.go:374] Setting ErrFile to fd 2...
	I1217 00:34:50.941474  196959 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:34:50.941667  196959 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-12816/.minikube/bin
	I1217 00:34:50.941883  196959 out.go:368] Setting JSON to false
	I1217 00:34:50.942097  196959 daemonize_unix.go:73] killing process 196844 as it is an old scheduled stop
	I1217 00:34:50.942209  196959 mustload.go:66] Loading cluster: scheduled-stop-123503
	I1217 00:34:50.942551  196959 config.go:182] Loaded profile config "scheduled-stop-123503": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:34:50.942617  196959 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/scheduled-stop-123503/config.json ...
	I1217 00:34:50.942807  196959 mustload.go:66] Loading cluster: scheduled-stop-123503
	I1217 00:34:50.942895  196959 config.go:182] Loaded profile config "scheduled-stop-123503": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1217 00:34:50.946932   16354 retry.go:31] will retry after 64.813µs: open /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/scheduled-stop-123503/pid: no such file or directory
I1217 00:34:50.948068   16354 retry.go:31] will retry after 130.093µs: open /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/scheduled-stop-123503/pid: no such file or directory
I1217 00:34:50.949229   16354 retry.go:31] will retry after 173.65µs: open /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/scheduled-stop-123503/pid: no such file or directory
I1217 00:34:50.950367   16354 retry.go:31] will retry after 180.954µs: open /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/scheduled-stop-123503/pid: no such file or directory
I1217 00:34:50.951499   16354 retry.go:31] will retry after 551.611µs: open /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/scheduled-stop-123503/pid: no such file or directory
I1217 00:34:50.952626   16354 retry.go:31] will retry after 827.951µs: open /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/scheduled-stop-123503/pid: no such file or directory
I1217 00:34:50.953746   16354 retry.go:31] will retry after 1.392798ms: open /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/scheduled-stop-123503/pid: no such file or directory
I1217 00:34:50.955933   16354 retry.go:31] will retry after 1.216959ms: open /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/scheduled-stop-123503/pid: no such file or directory
I1217 00:34:50.958127   16354 retry.go:31] will retry after 1.926509ms: open /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/scheduled-stop-123503/pid: no such file or directory
I1217 00:34:50.960309   16354 retry.go:31] will retry after 4.296059ms: open /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/scheduled-stop-123503/pid: no such file or directory
I1217 00:34:50.965506   16354 retry.go:31] will retry after 3.933584ms: open /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/scheduled-stop-123503/pid: no such file or directory
I1217 00:34:50.969705   16354 retry.go:31] will retry after 7.088961ms: open /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/scheduled-stop-123503/pid: no such file or directory
I1217 00:34:50.977891   16354 retry.go:31] will retry after 14.938489ms: open /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/scheduled-stop-123503/pid: no such file or directory
I1217 00:34:50.993189   16354 retry.go:31] will retry after 10.537623ms: open /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/scheduled-stop-123503/pid: no such file or directory
I1217 00:34:51.004443   16354 retry.go:31] will retry after 39.789807ms: open /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/scheduled-stop-123503/pid: no such file or directory
I1217 00:34:51.044679   16354 retry.go:31] will retry after 51.337652ms: open /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/scheduled-stop-123503/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-123503 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-123503 -n scheduled-stop-123503
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-123503
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-123503 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1217 00:35:16.824129  197601 out.go:360] Setting OutFile to fd 1 ...
	I1217 00:35:16.824246  197601 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:35:16.824255  197601 out.go:374] Setting ErrFile to fd 2...
	I1217 00:35:16.824259  197601 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:35:16.824468  197601 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-12816/.minikube/bin
	I1217 00:35:16.824685  197601 out.go:368] Setting JSON to false
	I1217 00:35:16.824756  197601 mustload.go:66] Loading cluster: scheduled-stop-123503
	I1217 00:35:16.825086  197601 config.go:182] Loaded profile config "scheduled-stop-123503": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:35:16.825146  197601 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/scheduled-stop-123503/config.json ...
	I1217 00:35:16.825329  197601 mustload.go:66] Loading cluster: scheduled-stop-123503
	I1217 00:35:16.825417  197601 config.go:182] Loaded profile config "scheduled-stop-123503": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
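Note: the "signal error was:  os: process already finished" lines above correspond to Go's os.ErrProcessDone, returned when a signal is sent to a child process that has already exited and been reaped; the test still passes, so the message is informational here. A minimal standalone Go sketch (purely illustrative, not minikube or test-suite code) that reproduces the same error string:

// sketch_done.go - reproduces the "os: process already finished" error seen above.
// Illustrative only; not part of minikube or its integration tests.
package main

import (
	"errors"
	"fmt"
	"os"
	"os/exec"
	"syscall"
)

func main() {
	cmd := exec.Command("true") // a child that exits immediately
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	_ = cmd.Wait() // reap the child; the *os.Process is now marked done

	// Signal 0 is the conventional "is it still alive?" check without delivering a signal.
	err := cmd.Process.Signal(syscall.Signal(0))
	fmt.Println("signal error was: ", err)                               // os: process already finished
	fmt.Println("is ErrProcessDone:", errors.Is(err, os.ErrProcessDone)) // true
}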
E1217 00:35:48.107775   16354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/functional-484344/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-123503
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-123503: exit status 7 (76.627278ms)

                                                
                                                
-- stdout --
	scheduled-stop-123503
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-123503 -n scheduled-stop-123503
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-123503 -n scheduled-stop-123503: exit status 7 (74.818994ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:176: Cleaning up "scheduled-stop-123503" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-123503
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-123503: (4.660240139s)
--- PASS: TestScheduledStopUnix (95.23s)

                                                
                                    
TestInsufficientStorage (8.73s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-503106 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-503106 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (6.289956833s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"9a244704-65b0-4283-87ec-0e66998369ef","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-503106] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"8fec5312-045e-4a12-a128-f6688c3a1898","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22168"}}
	{"specversion":"1.0","id":"6b13215d-c7da-4dac-8b9f-d0cd8d28002c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"60c04efe-3fab-4ebd-a889-9a434a34486e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22168-12816/kubeconfig"}}
	{"specversion":"1.0","id":"21aaef3a-77e5-4059-9917-54a23c342134","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-12816/.minikube"}}
	{"specversion":"1.0","id":"b16ce1ac-fccc-4af3-bdf9-cd9df7dbb9ae","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"e3a53e52-9238-405f-bfdf-242c87abb22a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"a443bc91-15ef-4525-97f8-ed97a9d7c9f2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"48a0e475-67a5-486b-869d-8e2b43f38c25","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"7df78c07-a5d3-4e41-9d8b-e5a159d78eef","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"2eb7507b-ab90-4ce1-8b62-d8b2a8d7e0c0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"ce26f23a-e3f6-47e2-ac42-62adafb21f63","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-503106\" primary control-plane node in \"insufficient-storage-503106\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"6c4dd72d-c167-4f2f-a31d-9157b15be884","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1765661130-22141 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"67999000-9836-460b-8c4b-022390a37bb7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"3101cfc1-bdb2-4d9a-bb30-c3cad26730a2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
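Note: with --output=json, minikube emits one CloudEvents-style JSON object per line, as in the stdout block above (type io.k8s.sigs.minikube.step/.info/.error, with a string-valued data payload). A minimal Go sketch (a hypothetical log-reading helper, not part of minikube or this test suite) that decodes such lines from stdin and prints each event's type and message:

// decode_events.go - read minikube --output=json lines (one JSON object per line)
// and print the human-readable message carried in each event's data payload.
// Hypothetical helper for reading logs like the ones above; not minikube code.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

type event struct {
	Type string            `json:"type"` // e.g. io.k8s.sigs.minikube.step / .info / .error
	Data map[string]string `json:"data"` // message, name, exitcode, advice, ...
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // error events with long advice can exceed the default buffer
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip any non-JSON lines
		}
		fmt.Printf("%-35s %s\n", ev.Type, ev.Data["message"])
	}
}

It can be fed the stdout of any of the --output=json runs quoted in this report.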
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-503106 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-503106 --output=json --layout=cluster: exit status 7 (277.366775ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-503106","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-503106","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1217 00:36:13.201562  200140 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-503106" does not appear in /home/jenkins/minikube-integration/22168-12816/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-503106 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-503106 --output=json --layout=cluster: exit status 7 (279.351825ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-503106","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-503106","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1217 00:36:13.481245  200249 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-503106" does not appear in /home/jenkins/minikube-integration/22168-12816/kubeconfig
	E1217 00:36:13.491251  200249 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/insufficient-storage-503106/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:176: Cleaning up "insufficient-storage-503106" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-503106
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-503106: (1.885718293s)
--- PASS: TestInsufficientStorage (8.73s)

                                                
                                    
TestRunningBinaryUpgrade (50.04s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.3664074796 start -p running-upgrade-883165 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.3664074796 start -p running-upgrade-883165 --memory=3072 --vm-driver=docker  --container-runtime=crio: (21.995995601s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-883165 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-883165 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (22.954174279s)
helpers_test.go:176: Cleaning up "running-upgrade-883165" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-883165
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-883165: (4.416984667s)
--- PASS: TestRunningBinaryUpgrade (50.04s)

                                                
                                    
TestKubernetesUpgrade (298.55s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-803959 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-803959 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (25.06301324s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-803959
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-803959: (2.320270271s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-803959 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-803959 status --format={{.Host}}: exit status 7 (87.049832ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-803959 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1217 00:37:43.475158   16354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/functional-396394/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-803959 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m21.366099352s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-803959 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-803959 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-803959 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (207.101447ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-803959] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22168
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22168-12816/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-12816/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.35.0-beta.0 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-803959
	    minikube start -p kubernetes-upgrade-803959 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-8039592 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.35.0-beta.0, by running:
	    
	    minikube start -p kubernetes-upgrade-803959 --kubernetes-version=v1.35.0-beta.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-803959 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-803959 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (7.025495177s)
helpers_test.go:176: Cleaning up "kubernetes-upgrade-803959" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-803959
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-803959: (2.402894025s)
--- PASS: TestKubernetesUpgrade (298.55s)

                                                
                                    
TestMissingContainerUpgrade (99.3s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.35.0.1502179881 start -p missing-upgrade-043393 --memory=3072 --driver=docker  --container-runtime=crio
E1217 00:36:46.780485   16354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/addons-401977/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.35.0.1502179881 start -p missing-upgrade-043393 --memory=3072 --driver=docker  --container-runtime=crio: (43.866721154s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-043393
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-043393: (10.425171497s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-043393
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-043393 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1217 00:37:11.174472   16354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/functional-484344/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-043393 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (41.563415113s)
helpers_test.go:176: Cleaning up "missing-upgrade-043393" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-043393
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-043393: (2.621972111s)
--- PASS: TestMissingContainerUpgrade (99.30s)

                                                
                                    
TestPause/serial/Start (60.51s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-004564 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-004564 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m0.505008108s)
--- PASS: TestPause/serial/Start (60.51s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.62s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.62s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (302.71s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.354092936 start -p stopped-upgrade-028618 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.354092936 start -p stopped-upgrade-028618 --memory=3072 --vm-driver=docker  --container-runtime=crio: (42.763332581s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.354092936 -p stopped-upgrade-028618 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.354092936 -p stopped-upgrade-028618 stop: (1.323811382s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-028618 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-028618 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m18.57212098s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (302.71s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (8.27s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-004564 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-004564 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (8.26204774s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (8.27s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-375259 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-375259 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (78.004454ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-375259] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22168
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22168-12816/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-12816/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (23.99s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-375259 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-375259 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (23.640127615s)
no_kubernetes_test.go:226: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-375259 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (23.99s)

                                                
                                    
TestNetworkPlugins/group/false (3.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-802249 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-802249 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (157.616315ms)

                                                
                                                
-- stdout --
	* [false-802249] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22168
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22168-12816/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-12816/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 00:38:25.930805  236152 out.go:360] Setting OutFile to fd 1 ...
	I1217 00:38:25.931077  236152 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:38:25.931086  236152 out.go:374] Setting ErrFile to fd 2...
	I1217 00:38:25.931091  236152 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:38:25.931396  236152 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-12816/.minikube/bin
	I1217 00:38:25.931880  236152 out.go:368] Setting JSON to false
	I1217 00:38:25.933050  236152 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4856,"bootTime":1765927050,"procs":317,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 00:38:25.933103  236152 start.go:143] virtualization: kvm guest
	I1217 00:38:25.934937  236152 out.go:179] * [false-802249] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 00:38:25.936317  236152 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 00:38:25.936516  236152 notify.go:221] Checking for updates...
	I1217 00:38:25.938939  236152 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 00:38:25.940106  236152 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22168-12816/kubeconfig
	I1217 00:38:25.941144  236152 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-12816/.minikube
	I1217 00:38:25.942272  236152 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 00:38:25.943434  236152 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 00:38:25.945103  236152 config.go:182] Loaded profile config "NoKubernetes-375259": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:38:25.945276  236152 config.go:182] Loaded profile config "kubernetes-upgrade-803959": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1217 00:38:25.945404  236152 config.go:182] Loaded profile config "stopped-upgrade-028618": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1217 00:38:25.945515  236152 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 00:38:25.969634  236152 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1217 00:38:25.969699  236152 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:38:26.025276  236152 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-17 00:38:26.014600292 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 00:38:26.025366  236152 docker.go:319] overlay module found
	I1217 00:38:26.027809  236152 out.go:179] * Using the docker driver based on user configuration
	I1217 00:38:26.028970  236152 start.go:309] selected driver: docker
	I1217 00:38:26.028988  236152 start.go:927] validating driver "docker" against <nil>
	I1217 00:38:26.029029  236152 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 00:38:26.030545  236152 out.go:203] 
	W1217 00:38:26.031522  236152 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1217 00:38:26.032556  236152 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-802249 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-802249

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-802249

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-802249

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-802249

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-802249

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-802249

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-802249

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-802249

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-802249

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-802249

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-802249" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-802249"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-802249" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-802249"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-802249" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-802249"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-802249

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-802249" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-802249"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-802249" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-802249"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-802249" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-802249" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-802249" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-802249" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-802249" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-802249" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-802249" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-802249" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-802249" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-802249"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-802249" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-802249"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-802249" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-802249"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-802249" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-802249"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-802249" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-802249"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-802249" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-802249" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-802249" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-802249" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-802249"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-802249" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-802249"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-802249" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-802249"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-802249" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-802249"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-802249" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-802249"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22168-12816/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 17 Dec 2025 00:37:52 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-803959
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22168-12816/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 17 Dec 2025 00:37:07 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: stopped-upgrade-028618
contexts:
- context:
    cluster: kubernetes-upgrade-803959
    user: kubernetes-upgrade-803959
  name: kubernetes-upgrade-803959
- context:
    cluster: stopped-upgrade-028618
    user: stopped-upgrade-028618
  name: stopped-upgrade-028618
current-context: ""
kind: Config
users:
- name: kubernetes-upgrade-803959
  user:
    client-certificate: /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/kubernetes-upgrade-803959/client.crt
    client-key: /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/kubernetes-upgrade-803959/client.key
- name: stopped-upgrade-028618
  user:
    client-certificate: /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/stopped-upgrade-028618/client.crt
    client-key: /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/stopped-upgrade-028618/client.key
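Note: the kubeconfig above only contains contexts for kubernetes-upgrade-803959 and stopped-upgrade-028618 and has an empty current-context, which is consistent with the "context was not found for specified context: false-802249" errors reported by the probes in this debugLogs section. A minimal Go sketch (assuming the standard k8s.io/client-go module is available; hypothetical, not test-suite code) that loads such a kubeconfig and lists its contexts:

// list_contexts.go - load a kubeconfig like the one dumped above and show its
// contexts and current-context. Hypothetical sketch using client-go.
package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	path := os.Getenv("KUBECONFIG") // e.g. the KUBECONFIG path used by this CI run
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		panic(err)
	}
	for name, ctx := range cfg.Contexts {
		fmt.Printf("context %q -> cluster %q (user %q)\n", name, ctx.Cluster, ctx.AuthInfo)
	}
	fmt.Printf("current-context: %q\n", cfg.CurrentContext) // empty here, so kubectl has no default context
}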

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-802249

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-802249" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-802249"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-802249" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-802249"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-802249" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-802249"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-802249" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-802249"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-802249" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-802249"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-802249" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-802249"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-802249" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-802249"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-802249" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-802249"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-802249" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-802249"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-802249" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-802249"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-802249" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-802249"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-802249" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-802249"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-802249" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-802249"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-802249" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-802249"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-802249" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-802249"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-802249" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-802249"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-802249" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-802249"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-802249" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-802249"

                                                
                                                
----------------------- debugLogs end: false-802249 [took: 3.074897762s] --------------------------------
helpers_test.go:176: Cleaning up "false-802249" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p false-802249
--- PASS: TestNetworkPlugins/group/false (3.41s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (15.91s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-375259 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-375259 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (13.612470302s)
no_kubernetes_test.go:226: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-375259 status -o json
no_kubernetes_test.go:226: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-375259 status -o json: exit status 2 (297.073297ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-375259","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-375259
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-375259: (2.000130395s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (15.91s)

                                                
                                    
TestNoKubernetes/serial/Start (6.76s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:162: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-375259 --no-kubernetes --cpus=1 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:162: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-375259 --no-kubernetes --cpus=1 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (6.759385468s)
--- PASS: TestNoKubernetes/serial/Start (6.76s)

                                                
                                    
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/22168-12816/.minikube/cache/linux/amd64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:173: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-375259 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:173: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-375259 "sudo systemctl is-active --quiet service kubelet": exit status 1 (266.473875ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)
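The pass condition here is the non-zero exit itself: systemctl is-active exits 0 only when the unit is active and 3 when it is inactive, which is exactly what a --no-kubernetes profile should report. A rough manual equivalent, assuming the released minikube binary rather than the test build and the profile name from this run:

    minikube ssh -p NoKubernetes-375259 "sudo systemctl is-active kubelet"
    # expected: prints "inactive" and exits with status 3; exit status 0 would mean kubelet is still running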

                                                
                                    
TestNoKubernetes/serial/ProfileList (14.73s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:195: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:195: (dbg) Done: out/minikube-linux-amd64 profile list: (13.911743792s)
no_kubernetes_test.go:205: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (14.73s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:184: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-375259
no_kubernetes_test.go:184: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-375259: (1.269233025s)
--- PASS: TestNoKubernetes/serial/Stop (1.27s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (7.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:217: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-375259 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:217: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-375259 --driver=docker  --container-runtime=crio: (7.095418979s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.10s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.26s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:173: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-375259 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:173: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-375259 "sudo systemctl is-active --quiet service kubelet": exit status 1 (263.45618ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.26s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (49.8s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-742860 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
E1217 00:40:48.107861   16354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/functional-484344/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-742860 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (49.802241034s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (49.80s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (7.29s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-742860 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [80cfe0a3-1fd0-46e2-90ad-14f7c908c862] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [80cfe0a3-1fd0-46e2-90ad-14f7c908c862] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 7.00324974s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-742860 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (7.29s)
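Every DeployApp subtest in this group follows the same shape: create the busybox pod from testdata/busybox.yaml, wait for it to become Ready, then exec a trivial command to confirm the runtime plumbing works end to end. A rough manual equivalent, assuming the kubeconfig context from this run and kubectl on PATH (the test itself polls with its own helpers rather than kubectl wait):

    kubectl --context old-k8s-version-742860 create -f testdata/busybox.yaml
    kubectl --context old-k8s-version-742860 wait pod/busybox --for=condition=Ready --timeout=8m
    kubectl --context old-k8s-version-742860 exec busybox -- /bin/sh -c "ulimit -n"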

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (16.96s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-742860 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-742860 --alsologtostderr -v=3: (16.960137722s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (16.96s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.03s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-028618
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-028618: (1.025240442s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.03s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (45.1s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-864613 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-864613 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (45.096460546s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (45.10s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-742860 -n old-k8s-version-742860
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-742860 -n old-k8s-version-742860: exit status 7 (89.734843ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-742860 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)
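The exit status 7 from the status command is expected for a stopped host (the test itself notes it "may be ok"); the point of the subtest is that addons can still be enabled against a stopped profile so they come up on the next start. A rough manual equivalent, assuming the released minikube binary and the profile name from this run:

    minikube status -p old-k8s-version-742860 --format={{.Host}}   # prints Stopped, exit status 7
    minikube addons enable dashboard -p old-k8s-version-742860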

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (43.47s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-742860 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
E1217 00:41:46.779939   16354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/addons-401977/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-742860 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (43.101094065s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-742860 -n old-k8s-version-742860
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (43.47s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (41.67s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-153232 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-153232 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2: (41.668074181s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (41.67s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (7.34s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-864613 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [a45ae093-ee01-4707-9a4c-570ad3b0770c] Pending
helpers_test.go:353: "busybox" [a45ae093-ee01-4707-9a4c-570ad3b0770c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [a45ae093-ee01-4707-9a4c-570ad3b0770c] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 7.004303567s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-864613 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (7.34s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (42.42s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-414413 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-414413 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2: (42.41687081s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (42.42s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (17.75s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-864613 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-864613 --alsologtostderr -v=3: (17.750043954s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (17.75s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-hl62s" [ea6229a9-6cc8-4a75-a422-59e0f08b134d] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004548301s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-hl62s" [ea6229a9-6cc8-4a75-a422-59e0f08b134d] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003191852s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-742860 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.28s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-742860 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.28s)
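VerifyKubernetesImages dumps the images present in the node's container runtime and reports anything outside the stock Kubernetes/minikube set; the busybox and kindnetd entries above are expected leftovers from earlier subtests rather than failures. A rough manual equivalent, assuming the released minikube binary:

    minikube -p old-k8s-version-742860 image list --format=json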

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-864613 -n no-preload-864613
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-864613 -n no-preload-864613: exit status 7 (100.058904ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-864613 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (52.02s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-864613 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-864613 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (51.59912638s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-864613 -n no-preload-864613
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (52.02s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (23.5s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-653717 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
E1217 00:42:43.475125   16354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/functional-396394/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-653717 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (23.499051967s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (23.50s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (7.29s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-153232 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [2ded7c57-a893-4051-8499-a73941ba914b] Pending
helpers_test.go:353: "busybox" [2ded7c57-a893-4051-8499-a73941ba914b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [2ded7c57-a893-4051-8499-a73941ba914b] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 7.003837208s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-153232 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (7.29s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (18.23s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-153232 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-153232 --alsologtostderr -v=3: (18.230673468s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (18.23s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.27s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-414413 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [48df1bad-87f8-4fbe-aa86-221abf160bdd] Pending
helpers_test.go:353: "busybox" [48df1bad-87f8-4fbe-aa86-221abf160bdd] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [48df1bad-87f8-4fbe-aa86-221abf160bdd] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.004099824s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-414413 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.27s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (2.76s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-653717 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-653717 --alsologtostderr -v=3: (2.761307956s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (2.76s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (18.17s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-414413 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-414413 --alsologtostderr -v=3: (18.167168519s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (18.17s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-653717 -n newest-cni-653717
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-653717 -n newest-cni-653717: exit status 7 (76.960911ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-653717 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (10.09s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-653717 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-653717 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (9.765168915s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-653717 -n newest-cni-653717
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (10.09s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-153232 -n embed-certs-153232
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-153232 -n embed-certs-153232: exit status 7 (82.98184ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-153232 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (47.69s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-153232 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-153232 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2: (47.372037896s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-153232 -n embed-certs-153232
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (47.69s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-653717 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-414413 -n default-k8s-diff-port-414413
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-414413 -n default-k8s-diff-port-414413: exit status 7 (80.265885ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-414413 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (51.97s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-414413 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-414413 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2: (51.635867633s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-414413 -n default-k8s-diff-port-414413
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (51.97s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-nrnvc" [853ce0a2-7658-4941-8a3a-39c71a8b6607] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003715353s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (71.61s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-802249 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-802249 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m11.609631794s)
--- PASS: TestNetworkPlugins/group/auto/Start (71.61s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.12s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-nrnvc" [853ce0a2-7658-4941-8a3a-39c71a8b6607] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004405657s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-864613 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.12s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-864613 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (37.75s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-802249 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-802249 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (37.754727363s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (37.75s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-472j2" [9f5811f0-bf00-4b4b-a326-a1e04c616776] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003718644s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-472j2" [9f5811f0-bf00-4b4b-a326-a1e04c616776] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003211267s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-153232 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-153232 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-wnwc6" [bd229193-f29a-44ac-a723-c842b5034e75] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003712842s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (49.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-802249 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-802249 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (49.176236161s)
--- PASS: TestNetworkPlugins/group/calico/Start (49.18s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-wnwc6" [bd229193-f29a-44ac-a723-c842b5034e75] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003600668s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-414413 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:353: "kindnet-c6fx4" [fc92c1bc-53fb-49d3-92c2-0c698d1961fb] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004210607s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-414413 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-802249 "pgrep -a kubelet"
I1217 00:44:32.793871   16354 config.go:182] Loaded profile config "kindnet-802249": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.32s)
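KubeletFlags simply lists the kubelet process on the node with pgrep -a so its command line (and therefore its configured flags, such as the container runtime endpoint) can be inspected. A rough manual equivalent, assuming the released minikube binary:

    minikube ssh -p kindnet-802249 "pgrep -a kubelet"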

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (10.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-802249 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-pk6q5" [0228635a-9bb0-4f58-82d7-577a5a013315] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-pk6q5" [0228635a-9bb0-4f58-82d7-577a5a013315] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.003857441s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.20s)
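NetCatPod installs the netcat deployment (a dnsutils container) that the later DNS, Localhost and HairPin probes exec into; replace --force makes the step idempotent across reruns. A rough manual equivalent, assuming the context name from this run (the test polls pod labels itself rather than using kubectl wait):

    kubectl --context kindnet-802249 replace --force -f testdata/netcat-deployment.yaml
    kubectl --context kindnet-802249 wait deployment/netcat --for=condition=Available --timeout=15m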

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (56.57s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-802249 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-802249 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (56.569323821s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (56.57s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-802249 "pgrep -a kubelet"
I1217 00:44:39.670625   16354 config.go:182] Loaded profile config "auto-802249": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.34s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (10.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-802249 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-jn6wb" [b207c19e-6c0d-4edc-a0ee-87bb5e76cc18] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-jn6wb" [b207c19e-6c0d-4edc-a0ee-87bb5e76cc18] Running
E1217 00:44:49.848375   16354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/addons-401977/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.004138893s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.20s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-802249 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.13s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-802249 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.10s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-802249 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.09s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-802249 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-802249 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-802249 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.13s)
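The DNS, Localhost and HairPin probes above all exec into the netcat deployment created earlier for this profile: DNS resolves the in-cluster kubernetes.default name, Localhost checks that a listener on the pod's own loopback (port 8080) is reachable, and HairPin checks that the pod can reach itself back through its own service name, the classic hairpin-NAT case. Rough manual equivalents, using the same commands the tests run:

    kubectl --context auto-802249 exec deployment/netcat -- nslookup kubernetes.default
    kubectl --context auto-802249 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    kubectl --context auto-802249 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"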

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (67.97s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-802249 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-802249 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m7.972912541s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (67.97s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:353: "calico-node-tfkvw" [ca96fe03-28e9-4b88-ad7e-032ef26458ad] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.00348403s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (51.55s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-802249 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-802249 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (51.54804087s)
--- PASS: TestNetworkPlugins/group/flannel/Start (51.55s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-802249 "pgrep -a kubelet"
I1217 00:45:15.931708   16354 config.go:182] Loaded profile config "calico-802249": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.41s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (9.67s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-802249 replace --force -f testdata/netcat-deployment.yaml
I1217 00:45:16.469626   16354 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
I1217 00:45:16.497643   16354 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-9cqgx" [55c72eee-6b5d-4e18-9021-26cab00bcda5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-9cqgx" [55c72eee-6b5d-4e18-9021-26cab00bcda5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 9.003607324s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (9.67s)
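
Outside the test harness, a rough equivalent of this readiness wait is kubectl's built-in condition wait; a sketch assuming the same app=netcat label and default namespace, with the 15m timeout mirroring the limit the test uses:

	kubectl --context calico-802249 wait --for=condition=Ready pod -l app=netcat -n default --timeout=15m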

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-802249 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-802249 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-802249 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.09s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-802249 "pgrep -a kubelet"
I1217 00:45:33.647514   16354 config.go:182] Loaded profile config "custom-flannel-802249": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (8.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-802249 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-78dzz" [6dc716be-5942-475d-b049-1c0a04655295] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-78dzz" [6dc716be-5942-475d-b049-1c0a04655295] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 8.003431342s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (8.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-802249 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-802249 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-802249 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (63.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-802249 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
E1217 00:45:48.107221   16354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/functional-484344/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-802249 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m3.202981515s)
--- PASS: TestNetworkPlugins/group/bridge/Start (63.20s)
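
After a --cni=bridge start like the one above, the CNI configuration written onto the node can be inspected over SSH; a sketch following the ssh invocation style used elsewhere in this report (the /etc/cni/net.d path is the conventional CNI config directory, not something this test asserts):

	out/minikube-linux-amd64 ssh -p bridge-802249 "sudo ls -l /etc/cni/net.d"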

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:353: "kube-flannel-ds-k2k4n" [f80f8577-b1ee-4f03-8a2f-e054e2fb7403] Running
E1217 00:46:08.847137   16354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/old-k8s-version-742860/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:46:08.853609   16354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/old-k8s-version-742860/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:46:08.865098   16354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/old-k8s-version-742860/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:46:08.886479   16354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/old-k8s-version-742860/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:46:08.927891   16354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/old-k8s-version-742860/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:46:09.009322   16354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/old-k8s-version-742860/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.0029451s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-802249 "pgrep -a kubelet"
E1217 00:46:09.171044   16354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/old-k8s-version-742860/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
I1217 00:46:09.377602   16354 config.go:182] Loaded profile config "flannel-802249": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (9.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-802249 replace --force -f testdata/netcat-deployment.yaml
E1217 00:46:09.492555   16354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/old-k8s-version-742860/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-jrlsl" [edb816e4-ec9c-4429-9753-74e26680faaa] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1217 00:46:10.134469   16354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/old-k8s-version-742860/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:46:11.416225   16354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/old-k8s-version-742860/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "netcat-cd4db9dbf-jrlsl" [edb816e4-ec9c-4429-9753-74e26680faaa] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.00398209s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-802249 "pgrep -a kubelet"
I1217 00:46:12.866626   16354 config.go:182] Loaded profile config "enable-default-cni-802249": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-802249 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-djf2w" [033599b0-6880-40ff-ae38-da121384a286] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1217 00:46:13.977863   16354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/old-k8s-version-742860/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "netcat-cd4db9dbf-djf2w" [033599b0-6880-40ff-ae38-da121384a286] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.003619318s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-802249 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.08s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-802249 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.08s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.08s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-802249 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.08s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-802249 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.08s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-802249 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.08s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.08s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-802249 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.08s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-802249 "pgrep -a kubelet"
I1217 00:46:50.198463   16354 config.go:182] Loaded profile config "bridge-802249": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (8.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-802249 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-j42lt" [55e5442e-1d58-412d-b5ef-98346de85627] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-j42lt" [55e5442e-1d58-412d-b5ef-98346de85627] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 8.005361432s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (8.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-802249 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.08s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-802249 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.08s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.08s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-802249 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.08s)

                                                
                                    

Test skip (34/415)

Order  Skipped test  Duration (s)
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.2/cached-images 0
15 TestDownloadOnly/v1.34.2/binaries 0
16 TestDownloadOnly/v1.34.2/kubectl 0
23 TestDownloadOnly/v1.35.0-beta.0/cached-images 0
24 TestDownloadOnly/v1.35.0-beta.0/binaries 0
25 TestDownloadOnly/v1.35.0-beta.0/kubectl 0
42 TestAddons/serial/GCPAuth/RealCredentials 0
49 TestAddons/parallel/Olm 0
60 TestDockerFlags 0
63 TestDockerEnvContainerd 0
64 TestHyperKitDriverInstallOrUpdate 0
65 TestHyperkitDriverSkipUpgrade 0
116 TestFunctional/parallel/DockerEnv 0
117 TestFunctional/parallel/PodmanEnv 0
130 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0
131 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
132 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0
211 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv 0
212 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv 0
246 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig 0
247 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
248 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS 0
262 TestGvisorAddon 0
284 TestImageBuild 0
285 TestISOImage 0
349 TestChangeNoneUser 0
352 TestScheduledStopWindows 0
354 TestSkaffold 0
372 TestStartStop/group/disable-driver-mounts 0.17
379 TestNetworkPlugins/group/kubenet 3.41
387 TestNetworkPlugins/group/cilium 3.63
x
+
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.2/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.2/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.2/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/kubectl (0.00s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:765: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:485: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestISOImage (0s)

                                                
                                                
=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:176: Cleaning up "disable-driver-mounts-827138" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-827138
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:615: 
----------------------- debugLogs start: kubenet-802249 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-802249

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-802249

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-802249

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-802249

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-802249

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-802249

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-802249

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-802249

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-802249

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-802249

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-802249" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-802249"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-802249" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-802249"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-802249" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-802249"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-802249

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-802249" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-802249"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-802249" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-802249"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-802249" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-802249" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-802249" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-802249" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-802249" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-802249" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-802249" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-802249" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-802249" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-802249"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-802249" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-802249"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-802249" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-802249"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-802249" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-802249"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-802249" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-802249"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-802249" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-802249" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-802249" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-802249" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-802249"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-802249" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-802249"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-802249" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-802249"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-802249" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-802249"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-802249" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-802249"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22168-12816/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 17 Dec 2025 00:37:52 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-803959
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22168-12816/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 17 Dec 2025 00:37:07 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: stopped-upgrade-028618
contexts:
- context:
    cluster: kubernetes-upgrade-803959
    user: kubernetes-upgrade-803959
  name: kubernetes-upgrade-803959
- context:
    cluster: stopped-upgrade-028618
    user: stopped-upgrade-028618
  name: stopped-upgrade-028618
current-context: ""
kind: Config
users:
- name: kubernetes-upgrade-803959
  user:
    client-certificate: /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/kubernetes-upgrade-803959/client.crt
    client-key: /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/kubernetes-upgrade-803959/client.key
- name: stopped-upgrade-028618
  user:
    client-certificate: /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/stopped-upgrade-028618/client.crt
    client-key: /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/stopped-upgrade-028618/client.key
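
Editor's note: current-context is empty in the dump above, so plain kubectl calls have no cluster to talk to; to point kubectl at one of the clusters that are present, the usual command is (context name taken from the dump):

	kubectl config use-context kubernetes-upgrade-803959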

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-802249

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-802249" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-802249"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-802249" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-802249"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-802249" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-802249"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-802249" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-802249"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-802249" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-802249"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-802249" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-802249"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-802249" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-802249"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-802249" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-802249"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-802249" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-802249"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-802249" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-802249"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-802249" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-802249"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-802249" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-802249"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-802249" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-802249"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-802249" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-802249"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-802249" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-802249"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-802249" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-802249"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-802249" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-802249"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-802249" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-802249"

                                                
                                                
----------------------- debugLogs end: kubenet-802249 [took: 3.236320664s] --------------------------------
helpers_test.go:176: Cleaning up "kubenet-802249" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-802249
--- SKIP: TestNetworkPlugins/group/kubenet (3.41s)
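
Editor's note: the repeated `Profile "kubenet-802249" not found` lines above are expected; the kubenet test is skipped before any cluster is started, so the host-side debug collectors have no profile to query. A minimal sketch for confirming that and cleaning up, using only the commands the log itself references (binary path assumed to be the same out/minikube-linux-amd64 used by the run):

	# list the profiles known to this minikube home; kubenet-802249 should not appear
	out/minikube-linux-amd64 profile list
	# remove the placeholder profile, mirroring the cleanup step run by helpers_test.go
	out/minikube-linux-amd64 delete -p kubenet-802249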

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (3.63s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-802249 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-802249

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-802249

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-802249

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-802249

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-802249

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-802249

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-802249

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-802249

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-802249

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-802249

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-802249" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-802249"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-802249" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-802249"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-802249" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-802249"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-802249

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-802249" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-802249"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-802249" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-802249"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-802249" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-802249" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-802249" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-802249" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-802249" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-802249" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-802249" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-802249" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-802249" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-802249"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-802249" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-802249"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-802249" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-802249"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-802249" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-802249"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-802249" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-802249"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-802249

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-802249

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-802249" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-802249" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-802249

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-802249

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-802249" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-802249" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-802249" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-802249" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-802249" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-802249" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-802249"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-802249" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-802249"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-802249" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-802249"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-802249" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-802249"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-802249" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-802249"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22168-12816/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 17 Dec 2025 00:37:52 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-803959
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22168-12816/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 17 Dec 2025 00:37:07 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: stopped-upgrade-028618
contexts:
- context:
    cluster: kubernetes-upgrade-803959
    user: kubernetes-upgrade-803959
  name: kubernetes-upgrade-803959
- context:
    cluster: stopped-upgrade-028618
    user: stopped-upgrade-028618
  name: stopped-upgrade-028618
current-context: ""
kind: Config
users:
- name: kubernetes-upgrade-803959
  user:
    client-certificate: /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/kubernetes-upgrade-803959/client.crt
    client-key: /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/kubernetes-upgrade-803959/client.key
- name: stopped-upgrade-028618
  user:
    client-certificate: /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/stopped-upgrade-028618/client.crt
    client-key: /home/jenkins/minikube-integration/22168-12816/.minikube/profiles/stopped-upgrade-028618/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-802249

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-802249" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-802249"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-802249" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-802249"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-802249" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-802249"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-802249" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-802249"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-802249" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-802249"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-802249" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-802249"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-802249" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-802249"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-802249" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-802249"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-802249" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-802249"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-802249" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-802249"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-802249" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-802249"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-802249" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-802249"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-802249" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-802249"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-802249" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-802249"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-802249" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-802249"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-802249" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-802249"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-802249" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-802249"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-802249" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-802249"

                                                
                                                
----------------------- debugLogs end: cilium-802249 [took: 3.46305584s] --------------------------------
helpers_test.go:176: Cleaning up "cilium-802249" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-802249
--- SKIP: TestNetworkPlugins/group/cilium (3.63s)
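
Editor's note: the repeated `context was not found for specified context: cilium-802249` errors in this section follow directly from the kubectl config dumped above, which contains only the kubernetes-upgrade-803959 and stopped-upgrade-028618 contexts and an empty current-context; the cilium-802249 profile was never created because the test is skipped before cluster start. A minimal sketch for verifying this against the same kubeconfig (assuming KUBECONFIG points at the file used by the integration run):

	# show the contexts actually present; cilium-802249 will not be listed
	kubectl config get-contexts
	# with current-context set to "", kubectl reports that no current context is set
	kubectl config current-context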

                                                
                                    